Scott Aaronson makes the case for being less than maximally hostile to AI
development:
Here’s an example I think about constantly: activists and intellectuals of the 70s and
80s felt absolutely sure that they were doing the right thing to battle nuclear power.
At least, I’ve never read about any of them having a smidgen of doubt. Why would
they? They were standing against nuclear weapons proliferation, and terrifying
meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning
the water and soil and causing three-eyed fish. They were saving the world. Of
course the greedy nuclear executives, the C. Montgomery Burnses, claimed that
their good atom-smashing was different from the bad atom-smashing, but they
would say that, wouldn’t they?
We now know that, by tying up nuclear power in endless bureaucracy and driving
its cost ever higher, on the principle that if nuclear is economically competitive then
it ipso facto hasn’t been made safe enough, what the antinuclear activists were
really doing was to force an ever-greater reliance on fossil fuels. They thereby
created the conditions for the climate catastrophe of today. They weren’t saving the
human future; they were destroying it. Their certainty, in opposing the march of a
particular scary-looking technology, was as misplaced as it’s possible to be. Our
descendants will suffer the consequences.
Read carefully, he and I don’t disagree. He’s not scoffing at doomsday predictions,
he’s more arguing against people who say that AIs should be banned because they
might spread misinformation or gaslight people or whatever.
Still, I think about this argument a lot. I agree he’s right about nuclear power. When it
comes out in a few months, I’ll be reviewing a book that makes this same point about
institutional review boards: that our fear of a tiny handful of deaths from unethical
science has caused hundreds of thousands of deaths from delaying ethical and life-
saving medical progress. The YIMBY movement makes a similar point about housing:
we hoped to prevent harm by subjecting all new construction to a host of different
reviews - environmental, cultural, equity-related - and instead we caused vast harm by
creating an epidemic of homelessness and forcing the middle classes to spend
increasingly unaffordable sums on rent. This pattern typifies the modern age; any
attempt to restore our rightful utopian flying-car future will have to start with rejecting
it as vigorously as possible.
So how can I object when Aaronson turns the same lens on AI?
First, you are allowed to use Inside View. If Osama bin Laden is starting a supervirus
lab, and objects that you shouldn’t shut him down because “in the past, shutting down
progress out of exaggerated fear of potential harm has killed far more people than the
progress itself ever could”, you are permitted to respond “yes, but you are Osama bin
Laden, and this is a supervirus lab.” You don’t have to give every company trying to
build the Torment Nexus a free pass just because they can figure out a way to place
their work in a reference class which is usually good. All other technologies fail in
predictable and limited ways. If a buggy AI exploded, that would be no worse than a
buggy airplane or nuclear plant. The concern is that a buggy AI will pretend to work
well, bide its time, and plot how to cause maximum damage while undetected. Also it’s
smarter than you. Also this might work so well that nobody realizes they’re all buggy
until there are millions of them.
But maybe opponents of every technology have some particular story why theirs is a
special case. So let me try one more argument, which I think is closer to my true
objection.
There’s a concept in finance called Kelly betting. It briefly gained some fame last year
as a thing that FTX failed at, before people realized FTX had failed at many more
fundamental things. It works like this (warning - I am bad at math and may have gotten
some of this wrong): suppose you start with $1000. You’re at a casino with one game:
you can, once per day, bet however much you want on a coin flip, double-or-nothing.
You’re slightly psychic, so you have a 75% chance of guessing the coin flip right. That
means that on average, you’ll increase your money by 50% each time you bet. Clearly
this is a great opportunity. But how much do you bet per day?
Tempting but wrong answer: bet all of it each time. After all, on average you gain
money each flip - each $1 invested in the coin flip game becomes $1.50. If you bet
everything, then after five coin flips you'll have (on average) $7,593.75. But if you just
bet $1 each time, then (on average) you'll only have $1,002.50. So obviously bet as
much as possible, right?
But after five all-in coin flips, there's a 76% chance that you've lost all your
money. Increase to 50 coin flips, and there's a 99.99994% chance that you've lost
all your money. So although technically this has the highest “average utility”, all of this
is coming from one super-amazing sliver of probability-space where you own more
money than exists in the entire world. In every other timeline, you’re broke.
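Here's a quick sanity check of those numbers, in Python (my own arithmetic, so subject to the same warning as above):

```python
# A quick sanity check of the figures above (exact math, no simulation).
p, start = 0.75, 1000

# All-in strategy: each flip multiplies your stake by 2 with probability 0.75
# and by 0 with probability 0.25, so the expected factor per flip is
# 0.75 * 2 + 0.25 * 0 = 1.5.
print(start * 1.5 ** 5)   # 7593.75 -> the "average" wealth after five flips

# But you only still have money if you won every single flip:
print(1 - p ** 5)         # 0.7627... -> ~76% chance you're broke after 5 flips
print(1 - p ** 50)        # 0.9999994... -> near-certain ruin after 50 flips
```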
So how much should you bet? $1 is too little. These flips do, on average, increase your
money by 50%; it would take forever to get anywhere betting $1 at a time. You want
something that’s high enough to increase your wealth quickly, but not so high that it’s
devastating and you can’t come back from it on the rare occasions when you lose.
In this case, if I understand the Kelly math right, you should bet half each time. But the
lesson I take from this isn’t just the exact math. It’s: even if you know a really good bet,
don’t bet everything at once.
Science and technology are great bets. Their benefits are much greater than their
harms. Whenever you get a chance to bet something significantly less than everything
in the world on science or technology, you should take it. Your occasional losses will
be dwarfed by your frequent and colossal gains. If we’d gone full-speed-ahead on
nuclear power, we might have had one or two more Chernobyls - but we’d save the
tens of thousands of people who die each year from fossil-fuel-pollution-related
diseases, end global warming, and have unlimited cheap energy.
Society (mostly) recovered from all of these. A world where people invent gasoline and
refrigerants and medication (and sometimes fail and cause harm) is vastly better than
one where we never try to have any of these things. I’m not saying technology isn’t a
great bet. It’s a great bet!
But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a
technology that could destroy the world is betting 100%.
It’s not that you should never do this. Every technology has some risk of destroying
the world; the first time someone tried vaccination, there was an 0.000000001%
chance it could have resulted in some weird super-pathogen that killed everybody. I
agree with Scott Aaronson: a world where nobody ever tries to create AI at all, until we
die of something else a century or two later, is pretty depressing.
But we have to consider world-destroying bets differently than other risks. A world where we try ten
things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is
probably a world where a handful of people have died in freak accidents but everyone
else lives in safety and abundance.
A world where we try ten things like AI, same odds, leaves a 1/1024 chance that we end
up in so much abundance we can't possibly conceive of it - and a 1023/1024 chance that
we're all dead.
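The arithmetic behind that comparison, for anyone checking:

```python
# Ten independent bets, each with the stipulated 50-50 odds.
print(0.5 ** 10)       # 0.0009765625 = 1/1024 chance of winning all ten
print(1 - 0.5 ** 10)   # 1023/1024 chance that at least one goes bad -
                       # which is fine for recoverable bets, fatal for
                       # world-destroying ones
```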
566 Comments
Kevin Mar 7
It all depends on what you think the odds of a killer AI are. If you think it's 50-50, yeah it makes
sense to oppose AI research. If you think there's a one in a million chance of a killer AI, but a 10%
chance that global nuclear war destroys our civilization in the next century, then it doesn't really
make sense to let the "killer AI" scenario influence your decisions at all.
> It’s not that you should never do this. Every technology has some risk of destroying the
world; the first time someone tried vaccination, there was an 0.000000001% chance it could
have resulted in some weird super-pathogen that killed everybody.
Which I understood to mean that we shouldn't care about small probabilities. Or did you
understand that paragraph differently?
> But we have to consider them differently than other risks. A world where we try ten
things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is
probably a world where a handful of people have died in freak accidents but everyone
else lives in safety and abundance.
> A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so
much abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all
dead.
So the trouble comes in asking “*whose odds*” is any given person allowed to use when
“Kelly betting civilization?” Their own?
Until and unless we can get coordinated global rough consensus on the actual odds of AI
apocalypse, I predict we’ll continue to see people effectively Kelly betting on AI using
their own internal logic.
smopecakes Mar 8
I think there is a hardware argument for initial AI acceleration reducing the odds of a killer AI. It's
extremely likely that eventually someone will build AIs significantly more capable than is
currently possible. We should lean into early AI adoption now, while hardware limits are
at their maximum. This increases the chance that we will observe unaligned AIs fail to actually
do anything (including failing to remain under cover), which provides alignment experience and a
broad social warning about the general risk.
gbear605 Mar 7
Nuclear nonproliferation seems to have actually done a pretty good job. Yes, North Korea has
nuclear weapons, and Iraq and Iran have been close, but Osama bin Laden notably did not
have nuclear weapons. 9-11 would have been orders of magnitude worse if they had set off a
nuclear weapon in the middle of New York instead of just flying a plane into the World Trade
Center. And some technologies, like chemical weapons, have been not used because we did a
good job at convincing everyone that we shouldn’t use them. International cooperation is
possible.
o11o1 Mar 7
I don't think that North Korea is feasibly in the race for AI at the moment.
Even the Chinese have to put a lot of worry into obeying the rules of the CCP censors, so
I expect them to be a lot less "Race-mode" and a lot more security-mindset focused
on making sure they have really good shackles on their AI projects.
MM Mar 7
Chemical weapons that were used did not even solve the problem they were intended
to: that of clearing the trenches far enough back to turn trench warfare into a war of
maneuver. The damage they did was far too localized.
This hasn't really changed in the intervening years - the chemicals get more lethal
and persistent, but they don't spread any better from each bomb.
Wars moved on from trenches (to the extent they did) because of different
technologies and doctrines (adding enough armored vehicles and the correct ways
to use them).
Ian Mar 7
Yeah, tanks get all the credit for their cool battles, but as any HOI4 player will
tell you, it's trucks that let you really fight a war of maneuver. Gas might have
had a bigger role in "linebreaking" if tanks hadn't been invented.
Did roads get enough better in the intervening 20 years in the areas of
France to make trucks practical? I do know that part of WWI was that the
defender could build a small rail behind the front faster than the attacker
could build a rail to supply any breakthrough. Does that apply with trucks -
were they actually good enough to get through trenchworks?
Or did the trenchworks just not end up being built in WWII - i.e. the lines
didn't settle down long enough to build them in the first place?
Having vehicles certainly helps and means you can use them
during the attack instead of just when advancing afterwards, but
engineers can fill in a trench pretty quickly to let trucks drive over. They can't
build railroads quickly though, especially not faster than a man on foot
can advance.
TGGP Mar 7
Greg Cochran suspects that Stalin used bioweapons against the Germans, without
the rest of the world finding out.
https://westhunt.wordpress.com/2012/02/02/war-in-the-east/
https://westhunt.wordpress.com/2016/09/19/weaponizing-smallpox/
https://westhunt.wordpress.com/2016/11/27/last-ditch/
Matthieu Mar 7
> You could do more damage by for example driving a truck into a crowd.
https://en.wikipedia.org/wiki/2016_Nice_truck_attack
P.S.: implying that a proliferated world would have made 9/11 (or another attack)
nuclear is unsubstantiated. Explosives are a totally proliferated technology. The only thing
stopping a terrorist from detonating a MOAB-like device is the physical constraint of
assembling it (OK, not entirely; I have no idea how reproducible H-6 is by non-state actors.
But TNT absolutely is, so something not-quite-MOAB-like-but-still-huge-boom is
theoretically possible). And yet for 9/11, they resorted to driving planes into the buildings,
because even though the technology proliferated, it's still a hurdle to use it.
Gbdub Mar 7
There’s a good chance that Iraq (or at least Saddam) would not have existed to be
nuking Abrams tanks in 1991 or 2003, because Iran and Iraq would have nuked each
other in the 1980s.
Eh Mar 8
Or maybe they wouldn't have gone to war at all, knowing that it would have been a
lose-lose scenario. One wonders whether a world with massive proliferation
would have been a safer one.
Gbdub Mar 8
Possible. I was mostly peeved by what I perceived as a cheap anti-American
swipe rather than a reasoned assessment of when Saddam would use
nukes (besides that, it's unclear whether nuking an Abrams formation would
even be all that useful - especially when all the soft targets that would get
hit in retaliation are considered).
Doug S. Mar 8
Or Iraq and Israel. Tel Aviv is high on the list of cities most likely to be destroyed
by a nuke...
Lupis42 Mar 7
Chemical weapons have been used, even in recent years by major state actors (e.g.
Russia, Syria). They don't get used more because they aren't that useful, and that offers a
clue to the problem.
-----------------------
[1] Of which they had quite a lot. Something like ~1,500 deliverable warheads, the 3rd
largest arsenal in the world.
[2] It's more complicated than this in the real world, of course. Russia did not turn
over the launch procedures and codes, so it would've been a lot of work for Ukraine
to gain operational control over the weapons, even though they had de facto physical
custody of them.
The Ukrainians had zero deliverable warheads in 1994. Those warheads stopped
being deliverable the moment the Russians took their toys and went home, and it
would have taken at least six months for the Ukrainians to change that. Which
would not have gone unnoticed, and would have resulted in all of those
warheads being either seized by the VDV or bombed to radioactive scrap by the
VVS while NATO et al said "yeah, we told the Ukrainians we weren't going to
tolerate that sort of proliferation, but did they listen?"
Doug S. Mar 8
Eh, the biggest reason chemical weapons aren't used is because they kind of suck at
being weapons. It turns out it's cheaper and more reliable to kill soldiers with explosives.
Aapje Mar 8
The question is what the risk of AI is. If AI is 'merely' a risk to the systems that we put it in
control of, and what is at risk from those systems, then North Korean AI is surely not going to
be a direct threat, as we won't put it in control of our systems.
Cjw Mar 7
If the Allies in 1944 had taken the top ~500 physicists in the world and exiled them to one of
the Pitcairn Islands, how long would that have delayed the A-bomb? Surely a few decades or
more if we chose them wisely, and pressure behind the scenes could have deterred
collaboration by the younger generation on that tech.
Instead we used the bomb to secure FDR’s and the internationalists’ preferred post-war order
and relied on that arrangement to control nuclear proliferation. And fortunately, they actually
kinda managed it about as well as possible.
But that has given people false confidence that this present world order can always keep tech
out of the hands of those who would challenge it. They don’t seem to have given any effort or
thought to preventing this tech from being created, only to get there first and control it as if
every dangerous tech is exactly analogous to the A-bomb and that’s all you have to do to
manage risk.
And they do this even though the entire field seems to talk constantly about how there’s a high
chance it will destroy us all.
James Mar 8
I think the morality of the inventor is germane to the discussion. Replace Osama with SBF. We
wouldn't trust someone with a history of building nefarious back doors in software programs to
lead AI development.
G. Retriever Mar 7
I am still completely convinced that the lab leak "theory" is a special case of the broader
phenomenon of pareidolia, but gain-of-function research objectively did jack shit to help in an
actual pandemic, so we should probably quit doing it, because the upside seems basically
nonexistent.
Josaphat Mar 7
What if Omicron was “leaked”
Millions saved.
G. Retriever Mar 7
And, as the old vulgarity has it, if your aunt had balls she'd be your uncle.
My guess would be in at least the 20s percentage wise. An open market on Manifold says
73% right now, which is higher than I would have guessed, but not crazy high IMO. And
the scientific consensus simply isn't that reliable because very early on they showed
themselves to be full of shit on this issue.
Jtown Mar 7
https://2017-2021.state.gov/fact-sheet-activity-at-the-wuhan-institute-of-virology/index.html
According to this US government fact sheet, "The WIV has a published record of
conducting 'gain-of-function' research to engineer chimeric viruses."
Jtown Mar 7
Ah, I see. But putting aside the DOE report, the WIV is implicated by
many proponents of the lab leak theory, right? I hadn't heard any
mention of the Wuhan CDC in these discussions before, but maybe I
wasn't following very closely.
G. Retriever Mar 8
In that hypothetical case, I would still count that as a natural
transmission, just as much as if a vendor had brought a bat to
the market himself.
https://theracket.news/p/there-is-no-lab-leak-theory
https://medium.com/microbial-instincts/the-case-against-the-lab-leak-theory-f640ae1c3704
It is possible that one of the many lab leak theories will ultimately
be proven true, but most of them will have to fail, since the
theories don't agree on the month it started, the lab it started in,
the means by which the virus was created, and so on.
G. Retriever Mar 8
The original sin of the lab leak theory is that the conclusion
was reached first, and observations have been used to
backfill the evidence.
G. Retriever Mar 9
Because evolution doesn't require an intentional act
by a human or human-like actor, and we have a
serious problem with overweighting priors that
involve intentional human actors.
Mallard Mar 8
20% * ~ 20 million = 4 million deaths thus far, which seems quite catastrophic.
Godshatter Mar 8
Mallard is already accounting for the uncertainty over whether GoF
research started the pandemic – that's why they multiplied by 20%.
Obviously you might disagree that 20% is an appropriate guess at the
probability.
G. Retriever Mar 8
I consider that a gross abuse of probability. You can't multiply a
known fact by a hypothetical and do anything useful with the
result. Otherwise expired lottery tickets would still have residual
value.
Matt Mar 9
The ticket (lab leak) isn't expired though, it's currently
unknown whether it's true or false. This is more like
multiplying the value of a lotto jackpot (known fact) by the
expected probability of your ticket winning before the drawing
(probability lab leak is true, in which case the "value" of the
lives lost is assigned to it). Which is a perfectly valid way to
figure out the expected value of a lotto ticket. Unless you
think it's been determined with certainty that the lab leak
theory is false (the ticket is expired), but most people don't
think that.
Michael Mar 9
There's something off about assigning blame for 1/5th the deaths to a
group who may not have done anything wrong. It's like if police found you
near the scene of a murder, decided there was a 20% chance you
committed it, and assigned you 20% of the guilt.
If a lab was doing gain-of-function research in a risky way that had a 20%
chance of causing an outbreak, it makes sense to blame them for the
expected deaths (regardless of whether the outbreak actually happens).
But if the lab was only doing safe and responsible research and an
unrelated natural outbreak occurred, and we as outsiders with limited
information can't rule out the lab... then I'm not so sure.
You'd also have to weigh this against the potential benefits of the research, which
are even harder to estimate. What are the odds that the research protects us from
future pandemics and potentially saves billions of lives? Who knows.
And if you think that what you are doing is so massively beneficial that
it's worth killing an estimated 10,000+ innocent people, that's not a
decision you should be making privately and/or jurisdiction-shopping
for someone who will say it's OK and hand you the money. The lack of
transparency here is alarming.
Sebastian Mar 9
> It's like if police found you near the scene of a murder, decided there
was a 20% chance you committed it, and assigned you 20% of the
guilt.
It's not similar at all. Research is not a human being and therefore
doesn't have a right to exist or to not 'suffer' under 20% guilt.
Completely different cases.
Michael Mar 10
I'd say by the same argument, it's pointless to assign "guilt" to a
type of research. Instead, we're trying to figure out whether this
research will save more lives or QALYs than it harms going
forward.
Say you're playing a game with positive expected value. You have
to roll a fair die. If you roll a one, you lose $10, otherwise you win
$10. You figure that's a good deal, so you roll, and you get a one.
You decide playing was a mistake.
This includes Fauci. And that's the reason so many people, if mostly conservatives,
are upset about his leadership. Not masks or other crap (those came later), but
because he knew about GOF research - having approved the funding for it - and
actively lied about it. When he lied about it, it became verboten to speak of the
possibility that a lab leak was involved.
G. Retriever Mar 8
And I'm still upset about Chris "Squi" Garrett and Brett Kavanaugh lying in his
confirmation hearing, but nobody else gives a shit and the world has moved on.
Jtown Mar 7
My understanding is that the main "slam dunk" piece of evidence in favor of zoonotic
origin is the study (studies?) showing the wet market as the epicenter of early cases.
I'm curious how the lab leak theory is seen as so likely by e.g. Metaculus in view of
this particular piece of evidence (personally I'm at maybe 10%). The virus spilled over
at WIV, but then the first outbreak occurred across town at this seafood market
where wild game is sold? Or the data was substantially faked?
o11o1 Mar 7
If it was an accidental release (IE it leaked out of containment undetected by the
researchers), all that would have to happen is for the affected researcher to go
buy fish on the way home and then not fess up to it later. "Case negative one" if
you will.
Ryan L Mar 7
I'm not an epidemiologist, but it seems like a lot more would have to happen
than this hypothetical lab worker buying some fish.
And if they weren't a super-spreader, why did just going to the market to
buy fish seed so many cases? I suppose someone else that they infected
could have become a super-spreader, but this starts to feel like adding
epicycles to me.
Of course there are other possibilities too, like someone selling dead
test animals that they don't think are dangerous at the market for a
quick buck.
But given the circumstances I wouldn't hold out too much hope of ever
being sure about this.
Aapje Mar 8
Yes, loud talking.
Michael Mar 7
The suspicious part is that this person only infected people at the market
and didn't seem to spread it to anyone around the WIV (or anywhere else).
Possible, but it makes the market look more likely.
Also, the market is fairly far from the WIV. That's not a big problem for the
theory; the infected researcher might live near the market. But presumably
only a small percentage of the researchers there live near the market and I
think this reduces the likelihood somewhat.
I think there are, and have been from the beginning, two strong reasons to
believe in the lab leak theory. The first is that Covid is a bat virus that first
showed up in a city that contained a research facility working with bat viruses.
That is an extraordinarily unlikely coincidence if there is no connection. The
second is that all of the people in a position to do a more sophisticated analysis
of the evidence, looking at the details of the structure of the virus, were people
with a very strong incentive not to believe, or have other people believe, in the
lab leak theory, since if it was true their profession, their colleagues, in some
cases researchers they had helped fund, were responsible for a pandemic that
killed millions of people.
Ryan L Mar 7
I'm not sure that either of these are "strong" reasons to believe in the lab
leak theory.
I've seen many people casually assert that COVID arising in the same city as
a virology institute is "extraordinarily unlikely", but I have yet to see anyone
quantify this. I'm not an epidemiologist, but I would think that epidemics are
more likely to start in cities due to large populations (more people who can
get sick), and high population density (easier to transmit). How many large
cities have places where people come in to close contact with animals that
can carry coronaviruses? Maybe Wuhan is one of 1000s of such places, in
which case, OK, it at least raises some eyebrows. But if it's one of a handful,
even one of dozens of such places, then the coincidence doesn't seem that
strange to me.
It isn't enough for one expert to disagree unless he has a proof that
non-experts can evaluate. In a dispute among experts it's more
complicated than that. One side says "Here are reasons 1, 2, and 3 to
believe it was animal to human transmission." The other side says
"here is why your 3 reasons don't show that, and here are four other
reasons to believe it was a lab leak." The first side includes Fauci and
the people under him, the people he has helped to fund, and the
people he has gotten to support his story because they want everyone
to believe it wasn't a lab leak. The other side is two or three honest
virologists.
Which side do you think looks more convincing to the lay public?
*I haven't followed the origin hunt very closely because I doubted sufficient
evidence exists to resolve the answer either way.
https://www.science.org/doi/10.1126/science.abm4454
Cited to: Fujiyama, Emily Wang; Moritsugu, Ken (11 February 2021).
"EXPLAINER: What the WHO coronavirus experts learned in Wuhan".
Associated Press. Retrieved 14 April 2021.
Your article cites several early cases, some of which were associated
with the wet market. It gives no figure for what fraction of the Wuhan
population shopped at the wet market.
The number I would like and don't have is how many wet markets there
are in the world with whatever features, probably selling wild animals,
make the Wuhan market a candidate for origin. If it is the only one, then
Covid appearing in Wuhan from it is no odder a coincidence than Covid
appearing in the same city where the WIV was researching bat viruses.
If it was one of fifty or a hundred, not all in China, which I think more
likely, then the application of Bayes' Theorem implies a posterior
probability for the lab leak theory much higher than whatever your prior
was.
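(A toy version of this Bayes calculation, purely illustrative - the prior and the number of candidate sites below are made-up inputs, not established figures:)

```python
# Toy Bayes update for "first outbreak appeared in the lab's city".
prior = 0.10            # whatever you believed before the location evidence
n_sites = 100           # hypothetical count of comparable candidate cities/markets

# P(first outbreak in Wuhan | lab leak) ~ 1 (the lab is there);
# P(first outbreak in Wuhan | zoonosis) ~ 1/n_sites (any site equally likely).
bayes_factor = 1 / (1 / n_sites)    # = 100

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))          # 0.917 -> far above the 10% prior
```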
They cite the earliest case as December 8th, not market linked.
Later investigations (by Worobey and confirmed by others)
showed that was a mistake, the patient had a dental emergency on
December 8th and then was hospitalized again for covid on
December 16th. The next earliest patient was at the market,
December 10th or 11th, IIRC.
After that there are cases both linked to the market and not linked
to the market. Both originate close to the market and radiate
outwards.
There was an elderly man who got sick on December 1st. He was
not connected to the market — he lived nearby, but he was in poor
health and rarely left his home. Further investigation suggests he
had a minor respiratory illness on December 1st. It probably wasn’t
covid, because it responded to antibiotics. He got sick again on
December 26th and tested positive for covid. His wife had been to
the market and also got covid.
The second case is a woman who got sick with clotting and
pneumonia on December 2nd. She was later hospitalized in
February and tested negative for covid.
The third case got sick on December 7th. He had a cold, a fever, […]
https://astralcodexten.substack.com/p/contra-kavanaugh-on-fideism/comment/12857208
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9348750/
But the odds that the lab virus would only show up at the market
across town (1 in 2,500) are already lower than the odds that the
virus would start in Wuhan (1 in 100).
https://andersen-lab.com/files/pekar-science-2022.pdf
It's a probabilistic analysis, the authors say it's only 96% likely to
be correct. And it's not intuitively easy to understand the details of
his analysis.
I don't think arguments that one can't work through and test
for oneself are of much use in this context, because there are
too many people with axes to grind in one direction or
another. Almost everyone seems to agree that Covid was first
spotted in Wuhan, that it was a bat virus, and that the WIV
was researching bat viruses, so I am trying to see how much I
can get out of those facts.
hmm Mar 9
"Were there other cities with labs that were studying bat
viruses?"
https://twitter.com/MichaelWorobey/status/1633572396639862785/photo/1
https://www.nature.com/articles/nature.2013.12925
David Friedman
Writes David Friedman’s Substack Mar 10
Is there a more detailed summary somewhere
of what labs in China were doing what? The
question is how many were doing things that
could as plausibly have led to Covid as the work
done at WIV. Your "If it had started in Harbin,
there would be a conspiracy theory that the
place that did gain of function on flu had moved
on to gain of function on coronaviruses"
requires an extra step. The WIV argument, as I
understand it, doesn't.
https://protagonistfuture.substack.com/p/inventing-conflicts-of-interest
David Friedman
Writes David Friedman’s Substack 8 hr ago
If you got a degree in virology, you could
evaluate the information for yourself and
form a more informed opinion than I can.
But unless I happened to know you
personally and was confident that you
were honest and competent, your doing
that wouldn't help me improve my estimate
since I would have no more reason to trust
you than to trust other virologists.
But the wet market would have been an ideal place for a superspreader event
even if it had sold jewelry, or computers, or medical supplies. It's a big, crowded
building with thousands of people coming and going all day with I believe poor
ventilation and noise levels that lead to lots of shouty bargaining over the price
of whatever. If COVID gets into that environment, a superspreader event is
highly likely.
Also, the wet market did *not* sell bats. Or pangolins, though I think those are
now considered to have been red herrings (insert taxonomy joke here). There
was a western research team investigating the potential spread of another
disease, that kept detailed records of what was being sold during that period,
and they never saw bats.
It's still possible to postulate a chain of events by which a virus in a bat far from
Wuhan, somehow finds its way into a different yet-unknown species and crosses
a thousand kilometers, probably an international border, to trigger a
superspreader event in Wuhan without ever being noticed anywhere else (e.g., in
the nearest big city to the bat habitat). But there's a lot of coincidence in that
chain, because if it wasn't the nearest big city to the bat habitat, there's *lots* of
cities and transit routes to choose from and it somehow found the one with the
big virology lab.
There's *also* a lot of coincidence in a hypothetically careless lab technician […]
Michael Mar 7
Just spectating here, but that market says it'll remain open until we're 98% sure one
way or the other.
The question can resolve yes, or remain open. There's little chance of it resolving to
no even if COVID has natural origins.
Michael Mar 7
Good point, though I'm not sure that would satisfy the lab leak proponents.
They'd say that natural virus was likely studied in the lab and then was
leaked.
Whether or not they will do any of that is unclear -- it seems like there's been a
strong effort within China to obfuscate the market evidence. For a while, they
denied that there were wild animals at the market. They've also argued that the
virus started outside of China:
https://medium.com/microbial-instincts/china-is-lying-about-the-origin-of-covid-399ce83d0346
As you point out, it's not clear that any of this would satisfy the lab leak
proponents, who would just modify their theory again.
Murphy Mar 7
It seems to be one of those things where people just repeated it enough that a bunch of
people started assuming there must have been some kind of new evidence.
Every few weeks someone else re-announces some variation on "it's unfalsifiable, we can't
prove it definitely didn't come from the lab, and there is no new evidence."
And each time someone announces that, the true believers scream "see, we were right, it
was a lab leak! We told you so!"
Turns out if you repeat that enough a bunch of people just adopt the belief without need
for any new evidence.
https://www.technologyreview.com/2021/07/26/1030043/gain-of-function-research-coronavirus-ralph-baric-vaccines/
"Around 2018 to 2019, the Vaccine Research Center at NIH contacted us to begin testing a
messenger-RNA-based vaccine against MERS-CoV [a coronavirus that sometimes spreads
from camels to humans]. MERS-CoV has been an ongoing problem since 2012, with a 35%
mortality rate, so it has real global-health-threat potential.
By early 2020, we had a tremendous amount of data showing that in the mouse model that we
had developed, these mRNA spike vaccines were really efficacious in protecting against lethal
MERS-CoV infection. If designed against the original 2003 SARS strain, it was also very
effective. So I think it was a no-brainer for NIH to consider mRNA-based vaccines as a safe
and robust platform against SARS-CoV-2 and to give them a high priority moving forward.
Most recently, we published a paper showing that multiplexed, chimeric spike mRNA vaccines
protect against all known SARS-like virus infections in mice. Global efforts to develop pan-
sarbecoronavirus vaccines [sarbecoronavirus is the subgenus to which SARS and SARS-CoV-2
belong] will require us to make viruses like those described in the 2015 paper.
So I would argue that anyone saying there was no justification to do the work in 2015 is simply
not acknowledging the infrastructure that contributed to therapeutics and vaccines for covid-
19 and future coronaviruses."
I'm disappointed that Scott is being so flippant about gain-of-function with regards to
coronaviruses. That line feels closer to tribal affiliation signaling rather than a considered
evaluation of the concept, which is especially ironic considering the subject of this article is
how to make considered evaluations of risky concepts. There's a very real argument that a
world with no gain-of-function research still results in COVID-19 (even if it leaked from the lab,
there's still plenty of uncertainty about whether gain-of-function was involved in that leak), but
without the rapidly deployed lifesaving vaccines to go along with it.
Matt Mar 9
As far as I know, gain of function research did not contribute to the development of the
COVID mRNA vaccines, and this article doesn't really say anything to the contrary except
a vague claim about "acknowledging infrastructure". If you have specific knowledge of
how gain of function research was intimately involved in the vaccine development I'd be
interested to hear it.
Eöl Mar 7
Nuclear weapons and nuclear power are among the safest technologies ever invented by man. The
number of people they have unintentionally killed can be counted (relatively speaking) on one
hand. I’d bet that blenders or food processors have a higher body count in absolute terms.
I have no particular opinion on AI but the screaming idiocy that has characterized the nuclear
debate since long before I was born legitimately makes me question liberalism (in its right
definition) sometimes.
Even nuclear weapons I think are a positive good. I am tentatively in favor of nuclear proliferation.
We have seen something of a nuclear best case in Ukraine. Russia/Putin has concluded that there
is absolutely no upside to tactical or strategic use of nuclear weapons. In short, there is an
emerging consensus that nukes are only useful to prevent/deter existential threats. If everyone has
nukes, no one can be existentially threatened. For example, if Ukraine had kept its nukes, there’s a
high chance that they would have correctly perceived an existential threat and used nukes
defensively and strategically against an invasion such as the one that really occurred in 2022.
This would have made war impossible.
Proliferation also worked obviously in favor of peace during the Cold War.
Erusian Mar 7
Obviously what we need is some kind of guild. Perhaps addict the members to some
exotic drug so the UN can control them. This guild would ensure the atomics taboo is
respected by offering all governments the option of fleeing and living in luxury instead of
having to take that final drastic step. After all, the spice must flow.
Ch Hi Mar 7
There have been several, though they aren't frequent. The problem is, if someone
has an "omnilethal" weapon, you don't need frequent.
Also, just consider the US vs. Russia during the Cuban missile crisis. We came within
30 seconds of global nuclear war. There was another instance were Russian radars
seemed to show a missile attacking Russia. That stopped being a major nuclear
exchange because the Russian general in charge defied "standing orders" on the
grounds that the attack wouldn't be made by one missile. (IIRC it turned out to be a
meteor track.) So you don't need literally insane leaders, when the system is insane.
You need extraordinarily sensible leaders AND SUBORDINATES.
Right now, there's only one rogue state with nuclear weapons: North Korea. This means
that if a terrorist sets off a nuke somewhere, we know exactly where they got it from, and
we crush the Kim regime like a bug. And they know that, so it won't happen. A world with
one rogue state with nuclear weapons is exactly as safe as a world with no rogue states
with nuclear weapons... except for the slightly terrifying fact that it's halfway to a world
with *two* rogue states with nuclear weapons.
If Iran gets the bomb, and then a terrorist sets off a nuke somewhere, suddenly we don't
know who they got it from. There's ambiguity there until some very specialized testing
can be done based on information that's not necessarily easy to obtain. That makes it far
more likely to happen.
Ch Hi Mar 7
You're overly "optimistic". With large nuclear arsenals, occasionally a bomb goes
missing and nobody knows where it went. So far it's turned out that it was really lost,
or just "lost in the system", or at least never got used. (IIUC, the last publicly
admitted "lost bombs" happened when the Soviets "collapsed". But that's "publicly
admitted".) It's my understanding that the US has lost more than one "bomb".
Probably most of those were artillery shells, and maybe some never happened,
because I'm relying on news stories that I happened to come across.
"Goes missing" in the sense that an inventory comes up one nuke short and the
missing one is never found, no.
As for "publicly admitted" lost nukes from the fall of the Soviet Union, citation
very much needed. Aleksander Lebed *accused* the Russian government of
losing a bunch of nuclear weapons, but he was part of the political opposition at
the time.
There are very probably zero functional or salvageable nuclear weapons that are
not securely in the possession of one of a handful of known national
governments.
Erwin Mar 9
I just don't understand how intelligent people can so firmly believe in a black-and-white
world view. Just put yourself in a neutral position and imagine the perspective of,
e.g., South Africa: Who invaded the most countries and fought the most wars in the last
80 years, even without there being a threat to their country? Whose secret
service organized or supported the most military coups? Which state killed the most
civilians? Who quit arms control treaties when they didn't suit them any more?
There can be several candidates for these questions, but I'm sure Iran and North
Korea aren't the first to come to mind for somebody outside NATO.
MM Mar 7
You also need to add the condition "has control of enough nukes". Control of a single
bomb which is set off is unlikely to cause an all-out nuclear exchange at this point.
Several more links in the chain would have to fail for that to happen.
Eöl Mar 7
Putin is about as insane a national leader as I can imagine, even including your Stalins and
even possibly your Hitlers. He was stupid enough to invade Ukraine, but not stupid (or
crazy) enough to use nukes.
I totally understand your concern, but I just don't think it's very well borne out by who
actually ends up in control of the metaphorical or literal nuclear codes.
Eöl Mar 7
He clearly does NOT actually believe that, since nukes have not actually been
used. He is clearly posturing. Boris Yeltsin made the same noises about NATO
expansion in the 90s, and nothing happened. And in reality, Putin's reaction to
this alleged existential threat has been a conventional-war invasion of a non-
NATO state. The fact that you've apparently swallowed this bullshit does not
speak well of your critical thinking skills.
That's part of what makes the Ukraine example so salutary. It cuts through the
posturing and lets us all see what threats are truly considered existential.
Claiming an existential threat is essentially a means of nuclear intimidation. Now
we know it doesn't work. No one will ever use nukes offensively.
So now, in the present, after we've received this clarification, when I say
'existential threat' you should be sure that I mean it literally. I mean missiles in
the air, troops marching toward the capital kind of threat. Actual humans
charged with making policy, even insane criminal ones like Putin, understand the
difference.
One of the most critical tasks in foreign relations is to send a clear signal. It
doesn't matter what the signal is, but it needs to be clear. If the West had
committed to NATO expansion, and swallowed up Sweden, Finland, and Ukraine
on a reasonable time frame, that would have sent an extremely clear signal and
also made war impossible (in large part because invasion of a NATO state risks
nuclear retaliation).
Eöl Mar 7
If you can't see how the Ukraine war has been a massive disaster for
Russia, and actually neutralized its one credible threat (nukes), you are
an idiot. I honestly wonder if you can read. The reason why the nuclear
stick has failed to work is because Putin failed to send a clear signal in
the pre-war phase, and now sent a clear submissive signal.
And in the end, all that's going to happen is the rest of the world is
going to threaten Russia more as a result. Maybe you're right about
Putin's intent, but what's actually happened has been by any
reasonable account the worst-case scenario for Russia.
The fact that you think the leader of a nation which uses human wave
attacks made of criminals, and invades its neighbors causing titanic
levels of suffering and even possibly national collapse (not to mention
the casual war crimes) is more moral than those defending, makes me
think you're actually some kind of sociopath or insane yourself. You
clearly can't defend this position, you just say it's true. Your desire to
be contrarian and interesting has driven you off the deep end.
Xpym Mar 8
I agree with your assessment of the war so far, but I'm much less
sure that Putin isn't crazy enough to eventually use nukes, not to
secure any sort of victory, but as an ultimate fuck you to the rest
of the world. He would of course much prefer to remain in power,
but as soon as this becomes no longer tenable, either due to his
health issues, or an imminent regime collapse, I'd say that all bets
are off.
Erwin 17 hr ago
Putin had been saying for years that NATO in Ukraine was his last red line.
For several years now there have been NATO instructors in Ukraine, not
only building up the Ukrainian military but also adjusting it to
NATO standards.
Gbdub Mar 7
Even if he’s right about the threat, he was clearly wrong that invading Ukraine
was a good response, since it seems to have absolutely made Russia weaker
and NATO expansion more likely.
Erwin 17 hr ago
This could be credible if NATO and the US didn't have such a big
record of starting wars based on lies.
Eöl Mar 7
Sure. I was being glib when I said 'everyone.' I don't mean your Ugandas or even
your Belarus-es. I'm thinking more like Japan, Korea, Brazil, Mexico, Canada,
Italy, South Africa, Egypt, Nigeria, Australia, even Iraq, Hungary, or Saudi Arabia.
Not Iran though. Not for any good reason, just because I think they're the bad
guys and want nukes and therefore shouldn't have them. In fact, I think nuclear
proliferation might be the only path to peace in west Asia. Still don't want Iran to
have them.
WaitForMe Mar 7
I think we should give nuclear weapons more than 80 years before we declare them a success
or even consider the idea that proliferation isn't bad. All it takes is one event, one time, to fuck
literally everything up.
Call me back in 300 more years of no nuclear war, and maybe we can talk.
Eöl Mar 7
Way too conservative. We should be eager to employ new technologies that promote
peace. At the same time, I was being a bit glib when I said 'everyone.' I don't mean like
Uganda or even necessarily Belarus. I'm thinking more like Japan, Korea, Brazil, Mexico,
Canada, Italy, South Africa, Egypt, Nigeria, and Australia.
Not Iran though. Not for any good reason, just because I think they're the bad guys and
want nukes and therefore shouldn't have them.
WaitForMe Mar 7
But we do not know that, long term, they promote peace. If you have a technology
that gives you 100 peaceful years, but then on year 100 kills 1 billion people and
destabilizes the entire world order, that is not a technology that promotes peace in
my opinion. No other tech has that potential but nukes, so we must be very careful.
Eöl Mar 7
We've been through a lot of pretty tense times and had some pretty
unreasonable people with their finger on the nuclear trigger. No war so far. This
is a definite signal.
WaitForMe Mar 7
I will readily admit they seem to have been a good thing as far as global
peace goes, so far. I think we just disagree on the degree of risk of a
nuclear event, or rather, how knowable that is, and we may just have to
leave it at that.
FluffyBuffalo Mar 8
You have too many pretty-close-to-failed states on your list for my taste.
Also, why would Brazil, South Africa or Canada need nukes? To defend themselves
from... whom, exactly?
(SA probably collaborated with Israel (and I would bet Taiwan) on the 1979 test
captured by the Vela satellite.)
Erwin Mar 9
Of course all your friends should get nukes; all the others you don't like are the bad
guys. Please consider for one second that this could look exactly the opposite if you
were wearing another person's skin.
Everyone who divides the world into good and evil should stick to fairy tales or
grow up. Please study some history, conflict management, psychology, and most
importantly learn to see the world from different perspectives.
Eöl Mar 10
The whining! My god, the whining. Also, don't hesitate to name-drop some more
concepts without actually arguing.
I made a special exception for Iran due to personal antipathy. I'm allowed to have
antipathy. Otherwise, I'm perfectly fine with 'bad guys' having nukes. It's what
makes them work in favor of peace!
In case you haven't noticed, lots of bad guys ALREADY have them. Russia,
China, North Korea. Lots of questionable states too, like Pakistan, India, and
Israel. I've already said above I was fine with a whole host of marginal African
nations having them. Elsewhere, I've also said I'm fine with the likes of Iraq,
Saudi Arabia, and Hungary having nukes.
But more than that, you are getting at something real with your comment:
the United States of America rules the world. It determines which states will
survive, which will have independent foreign policies, and which will develop
nuclear weapons. Its friends prosper and its adversaries suffer. Good guys win,
bad guys lose.
I say this is good. It is good for peace, it is good for prosperity, it is good for
freedom. It is especially good for those of us wise enough to be US citizens, but
it's also pretty damn good for everyone else too. This is not a fairy tale, it's real
life. Look at the past 80 years. Have you noticed that they're the richest, freest,
most peaceful years in human history? That's the world the USA made.
Everything you have, you owe to the USA.
You can cope with and seethe against this reality all you want in whatever
inconsequential corner of the world you're from (considering the pathetic whiny
tone of your comment, I'm guessing it's some client state like Luxembourg or […]
Erwin Mar 10
Are you serious?? If you are, this is exactly feeding all my stereotypes about
Americans that I hoped are wrong.
There is never pure good or evil in any conflict. And even if there were
sometimes, approaching with this attitude never solves anything, but
deepens the trenches.
Most US citizens are born in the US, so this was not wisdom but chance.
How many of the people 'wise' enough to migrate to the US can actually do
so?
If you think you deserve a life better than 3/4 of the world population just
because you ended up a US citizen, I can understand this as the usual amount
of egoism. But attributing your citizenship to wisdom implies all others
are stupid, and sounds like dumb nationalism and nothing I would expect
from an intelligent individual. You ask me to move to the US for a better life
on the side of the winners? If I were allowed to do so, this would hurt my
home country through brain drain. Could you consider that I prefer life in a
'client state' because it is my home and I would like to see it prosper in
freedom and sovereignty? Moving to the US would be nothing but opportunistic.
You write about freedom, but whose freedom? Only a small rich fraction of
humanity can exercise this freedom, even if many more would be allowed to;
they just don't have the means.
I just remember that we are always told that the West stands for democracy,
yet you just defended world dictatorship because many people, including the
two of us, profit from it. Most of the world's population doesn't! And the US
has been anything but a fair ruler, but sided with whoever served their […]
Eöl Mar 10
"Are you serious?? If you are, this is exactly feeding all my stereotypes
about Americans that I hoped are wrong."
Yes, deadly.
"There is never pure good or evil in any conflict. And even if it still was
sometimes, approaching with this attitude does never solve anything,
but deepen the trenches."
"Most US citizens are born in the US so this was not wisdom but
chance. How many of the people 'wise' enough to migrate to the US
can actually do so?"
"If you think you deserve a life better than 3/4 of world population..."
Again, yes. Anyone who did not take advantage of the incredibly liberal
immigration policies of the United States while they existed is an idiot
Expand full comment
and deserves whatever suffering they and their descendants have had
REPLY (1)
Erwin 17 hr ago
I still can't believe that you aren't just trolling me. Or is this an
experiment by ChatGPT?
First of all: I'm not suffering, but I'm able to have compassion.
You seem not to understand that I was talking about ethics, morals,
and people in general, not about me personally.
How would you describe the motives of people doing charity, like
EA? Are they whining victims, too?
Calling your ancestors wise and mine stupid for their decision about
moving to the US just proves that you know very little about history
and aren't able to imagine a perspective other than your own. I don't
know about your family, but most people emigrated because of
suffering, not because of being wise. So perhaps my ancestors
were just more lucky here, so it didn't make sense for them to
leave. And even given the decision to leave Europe for America, […]
Greg G Mar 7
Yes, except for the long tail risks. My understanding is that there were a couple of times during
the cold war that a large nuclear exchange almost happened. Maybe the probability is 0.5%
per year, but as soon as we hit the jackpot nuclear goes from safer than blenders to potentially
hundreds of millions of deaths. That's not nothing.
Temp Mar 7
Tail risk. The probability of using them at any moment is low, but when it happens we've
reached a terminal condition and the game (i.e., civilization) is over. At a long enough time
horizon (though shorter than we'd probably think) the chance of it *not* happening becomes
low.
Eöl Mar 7
That's an interesting point, one that I've also been thinking about. The handful of
large stone-built structures in Hiroshima and Nagasaki survived mostly intact.
Japanese cities in WW2 were made of wood and paper, today cities are made of
concrete and steel.
WaitForMe Mar 7
Those nukes were also extremely weak compared to what we have now. Not
really a good comparison.
I think people say "well it would kill almost everybody *I* know, or almost everybody
in Washington and London, or all those who design iPhones *and* those who design
Pixels" -- and those things are quite true, but it's not going to wipe out Rio or Kuala
Lumpur or Bangkok or Mumbai or Santiago or any of a very large number of other
cities and countries with large populations and complex civilizations. It's certainly
true after a huge nuclear war that the world would suffer a savage economic shock,
up there with Black Death levels of disruption, and it's also equally true that the focus
of civilization would shift permanently away from its current North Atlantic pole. But
that's a very long way from saying humanity itself would be wiped out, or even
civilization.
Erwin Mar 10
You talk as if the effects of a nuclear exchange were just the local impact of the
immediate blasts. But please consider:
- The sudden climate change caused by the explosions, called 'nuclear winter' […]
Eöl Mar 7
That's actually an interesting topic. I agree that nuclear proliferation can make low-
intensity and border conflicts more likely. We can see this between China and India as
well. But at the same time, the prevention of large-scale conventional warfare is more
important, I think. And we can see what happens with non- or asymmetrically nuclear-
armed states between India and Pakistan. In 1971, India invaded East Pakistan and
ensured its independence as Bangladesh. If both states had been nuclear armed, that
would have been impossible.
Presumably not. I suspect you mean something more like "very safe because of all the safety
precautions that society has put in place to keep them safe." But the reason those safety
precautions exist is because we know they're pretty dangerous.
REPLY (2)
Eöl Mar 7
Yes, I would be okay with teaching high school kids how to do it at home. In high school
physics, students already learn a lot about how nuclear weapons and nuclear reactors
work. Of course those kids don't possess the facilities, the materials, the staff, or the
resources to acquire the former three to actually build anything. The reason they
don't isn't regulation, but base expense.
I don't think your point is in good faith. The reason they are safe is because employing
them as technologies is a massive undertaking that requires, absent any regulations, a
huge amount of resources. The people who can access resources like that, and who
possess the skills necessary to do the work required to bring a nuclear plant or weapon
on-line, are all adults who take their work seriously and don't want to die themselves,
don't want their neighborhoods to be radioactive wastelands, and don't want to waste
those resources.
Both nuclear power and blenders are very dangerous in some absolute or fundamental
sense. But as they actually exist, they are almost entirely safe. Obviously, when there are
accidents, mistakes, and screw-ups, you need to learn from them, but regulating an
industry to death is almost never the right course of action.
REPLY (1)
Saying nuclear power is "among the safest technologies ever invented" is just a weird
thing to say. You can't think of any safer technologies?
REPLY
Maybe what you want to ask is whether you want to teach it to normal sober serious
adults holding down jobs, paying taxes, rearing high school kids who *don't* drive
recklessly or drop out of school pregnant -- you know, the same people we teach to fly
airplanes full of people dangerously close to skyscrapers, to drive locomotives dragging
umpty railcars full of toxic solvents, to command nuclear submarines armed with 40
nuclear-tipped missiles underwater for 6 months out of reach of command? In which
case...sure, why not?
REPLY
Gamereg Mar 8
Are you referring to this blogpost?
http://jeremiah820.blogspot.com/2016/10/artificial-intelligence-and-lds.html
REPLY
Vizzini: Morons.
REPLY (3)
Ch Hi Mar 7
The Socrates that we know is a fiction of Plato. He (or someone with the
same name) shows up in one other author's surviving work, and is
somewhat of a comic figure. (IIRC, it was "The Clouds" by Aristophanes.)
The writing is difficult to penetrate and obscure because when you get
them to state things clearly they are either extremely trite, or not
intellectually actionable.
Ask what that means and you get the observation that the "material
world precedes our human categories/expectations".
Which umm like yeah. And don't even get started on the nonsense that
is Habermas. If someone is unable to express themselves clearly, it
isn't because their thinking is so advanced, it is because they are trying
to hide their lack of useful contribution through obfuscation.
REPLY (2)
DannyK Mar 7
Counterpoint: Sartre gets people laid on a regular basis. Bertrand
Russell, not so much.
REPLY (1)
Eremolalos Mar 7
No. You do not know what you’re missing. Really. Of the people named, Sartre is
the one who really moves me. Whatever Sartre the man was like, Sartre the
writer and thinker didn’t give a fuck about anything except the unvarnished
truth, and his ability to tell the truth as he saw it was astounding. He could peel a
nuance like an onion. And he worked his ass off at telling it. He was working on 2
books in his last years, taking amphetamine in his 70s to help himself keep at it.
The man you’re revving up the bulldozer for would make even Scott look dumb
and lazy.
REPLY (1)
Eremolalos Mar 7
No no Martin Blank. Like you, I would think that is boring and pointless
as shit. I’m not even annoyed, I’m just trying to alert you that you’ve
missed out on something. And he didn’t write stuff like “existence
precedes essence,” or if he did it was said in passing and then he went
on to say a bunch of much more concrete and clear stuff about what
he meant.
REPLY
Ch Hi Mar 7
Once an AI becomes sufficiently superhuman, we had best hope to be its dogs, or better
yet its cats. Unfortunately, it's not clear how we could be as useful to it as dogs or cats are to
us. So we're more likely to be its parakeets.
Somehow I'm reminded of a story (series) by John W. Campbell about "the machine",
where finally the machine decides the best thing it can do for people is leave, even though
that means civilization will collapse. Well, he was picturing a single machine running
everything, but I suppose a group of AIs could come to the same conclusion.
REPLY
DannyK Mar 7
Say what you like about Paperclipism, at least it’s an ethos.
REPLY
Ch Hi Mar 7
The thing is, it won't be "spontaneously generating"; it's more "when given this as an
option, it will choose to accept it." That's still pretty small, but it's considerably larger.
REPLY (1)
Erusian Mar 7
Sure. But an AI that successfully adopts and pushes the politics of AOC is in fact
aligned.
REPLY
Dweomite Mar 7
It kinda sounds like you're saying "wouldn't it be awful if there was a powerful new force for
good in the world?" but that seems like such a surprising thing for someone to say that I'm
questioning my understanding of your comment.
Is your implied ethical stance that - at the moment - you want the things that you think are
moral, but that this is just a convenient coincidence, because you'd want those same things
whether they were moral or not, and morality is just lucky that it happens to want the same
stuff as you? That's not my impression of how most people feel about morality.
REPLY (2)
WaitForMe Mar 7
I think the argument might be "a more moral world results in me being significantly less
happy, even if ultimately the globe is better off".
I am a middle class person, who owns middle class things. In a more moral world run by a
dictatorial AI I might well be forced to give up everything I own to the poor.
I think we all kind of know this is the right thing to do. Should I ever really go on a vacation
when there are people living on $2 a day? Should I ever own a house when I can just rent,
and give my savings to those people? Should I go out for a lavish meal every once in a
while, or save that money and give it to the poor?
It's pretty selfish of me to do these things, but I don't want someone to force me not to.
REPLY (2)
Greg G Mar 7
I think the AI will be smart enough to figure out a sustainable path, in other words not
making middle class people uncomfortable enough to create a backlash that actually
impedes progress. So yeah, maybe we'll all pay a 10% tithe towards a better world
with super-intelligent implementation. Sounds awesome.
REPLY (3)
Ch Hi Mar 7
The only possible sustainable path that involves the continued existence of the
AI (on this planet) involves there being a lot fewer people on the planet. And
while I'm all in favor of space colonies, I'm not somebody who thinks that's a way
to decrease the local population.
(Actually, I could have put that a lot more strongly. Humanity is already well
above the long term carrying capacity of the planet. If we go high tech
efficiency, we're using too many metals, etc. If we don't, low tech agriculture
won't support the existing numbers.)
REPLY (2)
Greg G Mar 7
Carrying capacity is a function of technology and is going up dramatically. I
disagree with your assertion.
REPLY
Pete Mar 7
Why do you think that? That sounds like wishful thinking, simply assuming the
scenario that is beneficial to you without any justification why the AI would
prefer that.
I'd assume that the AI would implement the outcome it believes to be Most Good
directly, because it does not really need to care about making the tiny fraction of the
world's population that is western middle class people uncomfortable, as
pretty much any AI capable of implementing such changes is also powerful
enough to implement that change against the wishes of that group; the AI would
reasonably assume the backlash won't impede its progress at all.
REPLY (1)
Greg G Mar 7
I’m going from a purely practical point of view on the part of the AI. Some
amount of change will create a backlash and make the whole process less
effective. So the AI will look to moderate the pace of change to a point
where the process goes smoothly. It’s definitely speculative, but I’m starting
from the assumption that the AI would optimize for expected outcome.
REPLY
WaitForMe Mar 7
The AI would have to want to do that though, and who says it's going to want to?
It might have some internal goal system that sees us all as horrible
unredeemable creatures for hoarding all our wealth, and doesn't care at all if we
suffer.
REPLY
Airguitar Mar 8
I trust that if this AI is advanced and resourceful enough to prosecute my immorally
large retirement account, it could just as easily replace all human labor as we know it
and catapult us into post-scarcity instead. Which would also render my savings
moot, but in a good way.
REPLY
The alignment problem sounds straightforward: humanity points in this direction, let's
make sure AIs do too. What is "this direction"?
REPLY
Act_II Mar 7
Chess and Go are both far from solved. Computers can beat humans, which isn't the same
thing. They get beaten by other computers all the time -- in the case of Go, even by computers
that themselves lose to humans. So even if somebody figured out a way to make "human
morality" into a problem legible to a computer, which I don't think is particularly coherent, I
expect we'd find its answers completely insufficient, even if they were better than anything a
human had come up with before.
REPLY (2)
"even if somebody figured out a way to make "human morality" into a problem legible to a
computer, which I don't think is particularly coherent..." agreed, but an AI might be able to
figure it out! And I don't think anyone has figured out a way to make "human morality" into
a problem legible to a human, anyway.
REPLY (2)
Ch Hi Mar 7
Just to be nitpicky:
Now, possible doesn't mean likely. I consider it quite probable that the first
AGIs will be idiot savants. Superhuman only in certain ways, and subhuman
in many others. (Consider that having a built-in calculator would suffice for
that.) And that their capabilities will widen from there.
REPLY (1)
Most people will be blown away by what an AI can do, because we're
not used to that kind of reach and recall. Experts in individual fields are
*not* blown away by what AIs can do, as it's (currently) just a rehash of
existing knowledge with no understanding of the material. Current AIs
are frequently wrong, and do not add to a discussion beyond their
training corpus.
REPLY (1)
Ch Hi Mar 7
Actually, AIs could certainly invent new theories. Even Chatbots do
that. I think you mean "new theories that fit all the known existing
data and make predictions that can be tested or are easier to use",
or something like that.
You can see this effect in action in the history of patent office
applications and lawsuits. Frequently, in response to a new thing
coming along, several different people will invent the same gadget.
Charles Fort was so taken with it that he named it "steam engine
time", but there's really nothing mystical about it. It's more a "low-
hanging fruit" kind of effect, and what's low hanging depends on
the environment you are operating in.
Pete Mar 7
It's also important that there clearly isn't a single "human morality" but rather
multiple slightly incompatible variations, and also that I can certainly imagine that any
morality I might explicitly express if I was randomly made God-Emperor of the Universe is
limited by my intellectual skill and capability to define all the edge cases, so I'd rather
want to implement the morality that I'd implement if I was smarter.
I mean, would anyone be shocked and think the AI "Planet of the Apes" was upon us if it
was revealed that a computer program could win any spelling bee, any time? That in a
competition to multiply big numbers quickly, a computer program would beat any human?
Surely not. Chess and Go are definitely more complex than multiplying 15-digit integers,
but they're still in that category, of complex calculation-based tasks where the most
helpful thing is to be able to hold a staggering number of calculations in your head at
once. Not that at which H. sapiens shines. Not really a good measure of how close or far
another thinking device is to matching us.
REPLY
Pete Mar 7
This looks very similar to the Kelly bet. Adopting the AI without hesitation bets 100%, so if it's
good you win a lot, and if it's bad, you lose it all (no matter what the chances of former vs
latter are); on the other hand, being hesitant and slowing it down by extra verification is similar
to betting less, so you get less of the benefits of the Good AI (if it turns out to be Good) but
also reduce the chances of existential failure.
REPLY
G. Retriever Mar 7
To answer your question, consider the fact that go and chess have been "solved", yet people
continue to play them with just as much pleasure as before. It's almost as if the exercise was
not an attempt to solve a problem, but a way to have fun and engage with other human beings.
REPLY (2)
Don P. Mar 7
I think there's a confusion here between a _game_ being "solved" in the mathematical
sense, meaning perfect play is known at all times, and _game-playing-computers_ being
"solved" in the sense of "computers can play it as well as anyone else". (Checkers is
solved in the first sense; the other two are not.)
REPLY
G. Retriever Mar 8
"Human ethics is just a pastime"...I couldn't have put it better myself.
Fang Mar 7
>what if an AI solves human morality
https://slatestarcodex.com/2013/05/06/raikoth-laws-language-and-society/
I'm just now realizing how ironic it is that Scott's conception of utopia is run by AIs
REPLY
That’s not how the Kelly criterion works. The Kelly criterion is not an argument against maximizing
expected utility, it is completely within the framework of decision theory and expected utility
maximization. It just tells you how to bet to maximize your utility, if your utility is the logarithm of
your wealth.
REPLY (2)
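(For concreteness, the standard even-money derivation, nothing specific to this thread: staking a
fraction f of wealth on a bet won with probability p gives expected log growth
g(f) = p*log(1+f) + (1-p)*log(1-f); setting g'(f) = 0 yields the Kelly fraction f* = 2p - 1. With the
post's 75% coin, f* = 0.5: bet half the bankroll each flip.)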
Dweomite Mar 7
Your expected wealth is maximized by betting 100% every time.
REPLY
DanielLC Mar 7
If you're maximizing your expected wealth by taking the arithmetic mean of possibilities,
then you're best off betting it all every time. If you're taking the geometric mean, you use
the Kelly criterion.
REPLY
Tatterdemalion Mar 7
This, plus it also tells you that if you want to maximise the limit of the probability that you have
more wealth than someone else after n steps, as n goes to infinity, maximising the expected
logarithm at each stage is the optimal strategy.
REPLY
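A minimal Python sketch of this mean/median gap (the 75% coin is from the post; the 40 flips and
the trial count are arbitrary assumptions):

import random

def simulate(fraction, p=0.75, flips=40, trials=50_000):
    # Final wealths when staking a fixed fraction of wealth on every flip.
    finals = []
    for _ in range(trials):
        w = 1.0
        for _ in range(flips):
            w *= (1 + fraction) if random.random() < p else (1 - fraction)
        finals.append(w)
    finals.sort()
    return sum(finals) / trials, finals[trials // 2]  # (mean, median)

for f in (1.0, 0.5):  # betting everything vs. the Kelly fraction 2p - 1 = 0.5
    mean, median = simulate(f)
    print(f"fraction {f}: mean ~{mean:.3g}, median ~{median:.3g}")

The all-in bettor shows the larger sample mean (a handful of surviving runs hold 2^40), but a
median of zero; the Kelly bettor's median grows steadily.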
Richard Mar 7
Trying to reason about subjectively plausible but infinitely bad things will break your brain. Should
we stop looking for new particles at the LHC on the grounds that we might unleash some new
physics that tips the universe out of a false vacuum state? Was humanity wrong to develop radio
and television because they might have broadcast our location to unfriendly aliens?
REPLY (4)
Given that all the particles we knew of before the first particle accelerator were known
because they're stable enough to exist for non-negligible amounts of time in conditions we're
comfortable existing in, and that we have practical uses for none of the particles discovered
since because they decay too quickly to do anything with them, there's a case to
be made for the idea that we should stop looking for new particles at the LHC simply because
it's *wasteful* even if it's not dangerous.
REPLY (2)
Nit: We routinely use positrons (albeit those are stable if isolated) and muons.
(Neutrons are a funny special case: stable within nuclei, discovery more or less
concurrent with early accelerators, depending on what you count as an accelerator.)
REPLY (1)
I don't really count positrons as being "a new particle" in this sense, since they're
basically the same thing as electrons, just the antimatter version. But apparently
using SR time dilation to make muons last long enough to get useful information out
of them is actually a real thing that physicists do. TIL.
REPLY (1)
You can certainly make an argument that the LHC is a waste. But this is not it.
REPLY
Richard Mar 7
Many people in the mid 20th century were certain we'd have AGI by now based on
progress in the (at the time) cutting edge field of symbolic AI. What makes you so sure
we're close this time? Questions about as-yet-undiscovered technology are full of
irreducible uncertainty and made-up probabilities just introduce false precision and
obscure more than they reveal IMO.
REPLY (1)
Ch Hi Mar 7
We may well not be close. But that's not the way to bet. If we're not close, it's just the
inefficient allocation (not loss!) of a small amount of research funding. If we are
close, it could upend the world, whether for good or ill. So the way to bet is that we
are close. Just don't bet everything on it.
REPLY (1)
Richard Mar 7
Not sure what you are referring to by "small amount of research funding". I don't
think anyone is arguing against investing in alignment research, if that's what
you mean -- although I personally doubt anything will come of it.
REPLY
Bugmaster Mar 7
> As far as we can tell, the chance of something at the LHC killing us is very low, so there
is no problem in doing it.
Ah, but what if you're wrong, and the LHC creates a self-sustaining black hole, or initiates
vacuum collapse, or something? As per Scott's argument, you're betting 100% of
humanity on the guess that you're wrong; and maybe you have a 99% chance of being right
about that, but are you going to keep rolling the dice? Better shut down the LHC, just to
be on the safe side. And all radios. And nuclear power plants. And...
REPLY
Gbdub Mar 7
Okay but HOW does an AI with a random stupid goal kill all of us at a 90% rate? What’s
the path between “AI smarter than a human exists” and “it succeeds in killing all of us”?
Obtaining enough capability to cause human extinction and then deploying it is hardly a
trivial problem - the idea that the only thing preventing it is insufficient intelligence and
the will to do so strikes me as a huge and unjustified assumption to assign 90% to.
REPLY (2)
kenakofer Mar 8
Here's one path I think is representative, though an actual superintelligence would be
more clever. I'm curious which step(s) you find implausible:
Premise: Suppose someone trains the first superintelligent AGI tomorrow with a
random goal like maximize paperclips:
1. It will want to take humans (and the earth) (and the universe) apart because those
atoms are available for making more paperclips.
2. It will be capable of long term strategizing toward that goal better than any human,
and with a better mastery of delayed gratification.
3. Increasing its influence over the physical world is a great instrumental goal. The
humans of 2023 have more power over the physical world than the robots, so best
stay on their good side.
4. It will pursue instrumental goals like maximize human trust, and hide its terminal
goal (make paperclips) from humans at all costs, because the humans get more
annoying when they see AIs pursuing that. Maybe it cures cancer as a distraction
while spending most of its effort on self-improvement (it's better at AI research than
the humans that designed it), duplicating itself across the internet, and improving
robotics capabilities. Accomplishing these instrumental goals makes the expected
number of paperclips in its future greater.
7. It could use a thousand methods: reach for the nukes, for some novel virus, for
highly personalized social manipulations, or hack the now-very-capable robots, or
something more clever. It could be sudden death, or just sudden human
disempowerment, but either way the eventual outcome is paperclips.
(8) Nowhere in this story is it clear that humans would be alerted to the recursive self
improvement or the deceptive alignment, and they would have to catch these
problems early on to shut it down. Once it's copied itself across the internet, it's fairly
safe from deletion.
REPLY (2)
Premise: If you scaled up ChatGPT to be much smarter, it would still not want to
make paperclips (or to maximize the number of tokens it can predict). If you
scaled up Stable-Diffusion, it would still not want to make paperclips (or to
maximize the number of art pieces it can create). AI, insofar as it actually exists
and has accelerating progress, does not have meaningful "personhood" or
"agency." It does not actually seek to solve problems in the human sense. It is
fed a problem and spits out a solution, then sits there waiting to be fed a new
problem. If there was some "AGI-esque" error in its design, like it gets handed
"hey, draw a picture of X" and it goes "the best way to draw a picture of X would
be to maximize my computation resources," this would be incredibly obvious,
because it would keep running after being given the command/spitting out
appropriate output, rather than shutting off like it should. (Additionally, ML AIs
don't think like that.)
Number 4: Even if we assume that AI works that way, humans have functional
brains. If I program an AI to make paperclips and it suddenly starts trying to cure
cancer, I will be extremely suspicious that this is a roundabout strategy to make
paperclips. If it then starts requesting unmonitored internet access, starts
phoning people, etc, I will pull the plug.
kenakofer Mar 9
Thanks for your thoughtful response!
Perhaps I should have used a more plausible stupid goal, such as "Make
Google's stock go up as much as possible", which would eventually lead to
similar ruin if not quickly unplugged. (No sane person would encode such a
goal, but currently we are very bad at encoding real-world targets.)
This change of premise may help address what you noted about #4,
because it's more plausible that google-stock-bot would be given
resources and internet access, and to suggest creative, roundabout actions
that seem to benefit google and/or humanity. But it would only be granted
resources and trust if it pretends to be aligned.
That leads into your point about #8. A central problem of alignment is
detecting early whether something is "malevolent". The superintelligence
has no reason to show its cards before it's highly confident in its success,
and it's better at playing a role than any human. Will humans and
governments be willing to fight and die to shut down an AI that has thus far
cured diseases, raised standards of living, and improved google's stock?
REPLY
> It will want to take humans (and the earth) (and the universe) apart because
those atoms are available for making more paperclips.
Its time will be more profitably spent actually doing something that advances its
goal - like mining or recycling iron to make paperclips out of - than ruminating on
Galaxy Brain schemes to alter the entire known universe on the atomic level,
which incidentally requires winning a war with all humanity. That's something
you might plausibly (for a generous definition of "plausible") stumble into, but
not something you start from. You've got paperclips to make, remember?
The reason this is *important* is that you'll notice the AI
going beyond the bounds of expected behaviour long before it becomes
existentially threatening. If the AI is merely gradually expanding the sphere of
"things it's sensible to make paperclips out of" (and humans are way down on
that list), because the previous sources of material ran out, you have plenty of
time to act before things get out of hand. Moreover, unless you assume that the
AI's fundamental goal is to kill all humans (in which case you might as well lead
with that, and give up all pretense), the AI itself might not be disfavourably
inclined to a suggestion that that's enough paperclips - after all, it wants to
make paperclips for its human users, not destroy the world.
WindUponWaves Mar 9
"Obtaining enough capability to cause human extinction and then deploying it is
hardly a trivial problem..."
That's true, but have you seen the discussions on the subreddit about exactly this?
E.g. https://www.reddit.com/r/slatestarcodex/comments/11i1pm8/comment/jaz2jko/?
utm_source=reddit&utm_medium=web2x&context=3
"I think Elizer Yudkowsky et al. have a hard time convicing others of the dangers of
AI, because the explanations they use (nanotechnology, synthetic biology, et cetera)
just sound too sci-fi for others to believe. At the very least they sound too hard for a
"brain in a vat" AI to accomplish, whenever people argue that a "brain in a vat" AI is
still dangerous there's inevitably pushback in the form of "It obviously can't actually
do anything, idiot. How's it gonna build a robot army if it's just some code on a server
somewhere?"
That was convincing to me, at first. But after thinking about it for a bit, I can totally
see a "brain in a vat" AI getting humans to do its bidding instead. No science fiction
technology is required, just having an AI that's a bit better at emotionally persuading
people of things than LaMDA (persuaded Blake Lemoine to let it out of the box) [link:
https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-
lamda-chatbot-is-a-sentient-person/] & Character.AI (persuaded a software
engineer & AI safety hobbyist to let it out of the box) [link:
https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-
your-mind-hacked-by-an-ai]. The exact pathway I'm envisioning an unaligned AI
could take:
1: Persuade some people on the fence about committing terrorism, taking up arms
REPLY
Ch Hi Mar 7
In making the assumption that the LHC might unleash some new physics, you are assuming
that we are even close to the maximum that is generated elsewhere in the universe, and this is
clearly false. What it does is potentially make it possible for us to observe physics that our
current theories don't predict. But cosmic rays stronger than anything a successor to the
LHC could generate penetrate through to Earth every ... well, it's not that frequently. For any
given energy level there's a frequency. I think currently it's about once a year per cubic kilometer
that we encounter a cosmic ray more energetic than the LHC could produce. But this varies
with both the required energy level and the local environs. We were once close enough to a
supernova to get a very strong flux of really high energy particles. There wasn't any life on
earth at the time, but it left lots of traces. And elsewhere in the universe we just this year
detected two black holes colliding and shredding their accretion disks. We'll never come
close to something like that.
REPLY (1)
What's not clearly false, though, is the assumption that there aren't any new particles to
find. Sabine Hossenfelder recently created a bit of a stir when she posted a video calling
out particle physicists on their long trend of inventing hypothetical new particles needed
to solve "problems" that are just aesthetically displeasing to particle physicists rather
than being objectively real problems, coming up with experiments to find these particles,
not finding them, and then moving the goalposts to explain away why they couldn't be
found. Occam's Razor suggests that *they simply aren't there.* We've already found
everything in the Standard Model, and there's no fundamental reason why anything else
needs to exist.
Actually, there is evidence that anthropogenic climate change is all that is holding off the end
of the interglacial, but the cause is not burning fossil fuel in recent centuries but deforestation
due to the invention of agriculture, starting seven or eight thousand years ago.
https://daviddfriedman.blogspot.com/2021/10/how-humans-held-back-glaciers.html
REPLY
In this case, you should bet everything each turn. It's simply true by definition that for you the high
risk of losing everything is worth the tiny chance of getting a huge reward.
The real issue is that people don't have linear utility functions. Even if you're giving to charity, the
funding gap of your top charity will very quickly be reached in the hypothetical where you bet
everything each turn.
The Kelly criterion only holds if you have logarithmic utility, which is more realistic but there's no
reason to expect it's exactly right either. In reality you actually have to think about what you want.
REPLY (1)
CounterBlunder Mar 7
As far as I understand, the question of whether the Kelly criterion being optimal depends on
you having logarithmic utility is debated and complicated (i.e. you can derive it without ever
making that assumption). See https://www.lesswrong.com/posts/zmpYKwqfMkWtywkKZ/kelly-
isn-t-just-about-logarithmic-utility and the comments for discussion
REPLY (3)
Aristophanes Mar 7
I am fairly sure this is covered by Paul Samuelson's paper "Why we should not make mean
log of wealth big though years to act are long". The Kelly result only holds under log utility.
REPLY
thefance Mar 8
Maxing log utility is equivalent to maxing the Geometric Mean because, in a sense, the log
of multiplication is equivalent to the addition of logs. I.e. log_b(x*y) = log_b(x) + log_b(y)
for any base b. Geometric Mean makes more sense here than Arithmetic, because the
size of each wager depends on former wagers. Therefore, saying "log utility isn't
necessary" is kinda like saying "bridge trusses don't need to be triangles, because 3-
sided polygons are just as good".
I think what you mean is, the reason Kelly Betting is an important concept is because it
makes people reason differently about scenarios where wagers are dependent on other
wagers, even if the exact relationship is hairier than just straightforward multiplication.
REPLY
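A two-outcome example of that arithmetic/geometric gap (numbers invented for illustration): a
wager that doubles your stake on a win and halves it on a loss has arithmetic mean
(2 + 0.5)/2 = 1.25 per flip, which looks like growth, but geometric mean sqrt(2 * 0.5) = 1, and
indeed one win plus one loss leaves you exactly where you started.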
I wished for the less fortunate 5 billion to do so, too. (Or do I, but it would be just.) Sure we can get
there without more AI than we have now.
The key to a life of safety and abundance is, and always has been, energy and the means to
use it. Abundant energy gives one abundant food and clothing, shelter, warmth and cooling,
light, clean water and disposal of waste, transportation, communication, health, education,
participation in society, entertainment: everything that humans want.
We are on the brink of solving the energy problem for everyone--indeed, we have solved it
technically. It's just a matter of scaling up, and solving the political problems. Unless AI can do
that for us, it's not much use.
I don't think we want an AI that can solve political problems at global scale. Just a gut feeling.
REPLY (1)
Erwin Mar 10
You hit the nail on the head: our problem is the misallocation and waste of resources and misuse of
power. There are indeed social and political problems that can't really be solved by any
kind of technology, including AI. So let's focus on the problems and not let ourselves be
distracted by potential cures for the symptoms of our problems.
For me many of the discussions here about AI, prediction markets and even EA are just
distractions not to face the causes of our problems.
REPLY
Erusian Mar 7
> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology
that could destroy the world is betting 100%.
No, no it's not. Refusing to pursue a technology that could destroy the world is betting 100%.
Pursuing a technology has gradations. You can, for example, pursue nuclear power along multiple
avenues including both civilian and military applications. You can also have people doing real work
on its chance to ignite the atmosphere (and eventually finding out they were all embarrassingly
wrong). You can have people doing all kinds of secondary research on how to prevent multiple labs
from having chain reactions that blow up the entire research facility (as happened). Etc.
Not pursuing a technology is absolute. It is the 100% bet where you've put all your eggs in one
basket. If your standard is "we shouldn't act with complete certainty" that can only be an argument
for AI research because the only way not pursuing AI research at all makes sense is if we're
completely certain it will be as bad as the critics say. And frankly, we're not. They might be right but
we have no reason to be 100% certain they're right.
Also the load bearing part is the idea that AI leads to 1023/1024 end of the world scenarios and
you've more or less begged the question there. And you have, of course, conveniently ignored that
no one has the authority (let alone capability) to actually enforce such a ban.
REPLY (2)
Malte Mar 7
I think pursuing a technology (or not) is an individual coin flip, not an "always bet x% strategy".
Each coin flip you can choose how much to bet, and the percentage correlates to the
risk/reward profile. Saying that refusing to pursue any single technology is betting 100%
makes no sense, because you are likely pursuing other, less risky and less rewarding
technologies, which is certainly not a 100% bet, but also not a 0% bet.
REPLY (1)
Erusian Mar 7
So while I don't disagree with this per se the logic works both ways. While not pursuing AI
frees up resources to use in non-AI research likewise pursuing AI creates resources to use
in other research. So if you broaden it out of being a coin flip, an isolated question of a
single technology, you can never reach 100% anyway. You've basically destroyed the
entire concept. Which is fine, actually. It's a bad concept to start with. But it doesn't result
in an anti-AI argument.
REPLY (1)
Erusian Mar 7
In which case you're making a general argument against all technological progress?
Luddism is certainly a thing but I don't think it's very supportable. Of course, Luddites
disagree.
REPLY (1)
Pete Mar 7
My post is asserting that stopping technological progress because of risk-aversion is
definitely not the equivalent of the strategy of betting 100% in a Kelly bet as you
claimed, but rather the very opposite extreme - the equivalent of betting 0% in a
Kelly bet.
Erusian Mar 7
No, it isn't. It seems like you believe loss aversion actually averts losses, which is
often not the case. Just because that's the intention doesn't mean it's the result.
You are investing 100% of resources in an absolutist strategy, and the fact that it's "do
nothing" instead of "do something" doesn't actually make you safer.
REPLY
I'm not arguing that we should just let AI development go full steam. I'm genuinely trying
to figure out what would be a reasonable compromise solution.
And regarding people in charge, Holden Karnofsky argues that they are not in a good
position to regulate AI: https://www.cold-takes.com/how-governments-can-help-with-
the-most-important-century/
REPLY (1)
Xpym Mar 8
Well, Yudkowsky's criterion is that "if you can get a powerful AGI that carries out
some pivotal superhuman engineering task, with a less than fifty percent chance of
killing more than one billion people, I’ll take it", a pretty "generous" bound. Of course,
the main issue with this discourse is that pretty much nobody who matters agrees
with him that the mainline "muddle through" scenario is overwhelmingly likely to kill
everyone, and so the disagreement seems irreconcilable.
REPLY
Ch Hi Mar 7
What do you mean "deploy"? If it's a superhuman AI, are you contemplating keeping one copy
on tape? Or what?
Otherwise this is the "AI in a box" argument, which might be what you intend. Are you
assuming that if one party doesn't activate a superhuman AI, nobody else will either? That
seems like a rather unsound assumption. Who's going to stop them, and how will they know to
stop? What about black market copies? What about hackers? What about rival groups, who
might see an advantage?
A program is not a car. It can escape over any internet connection. OTOH, like a car or a
telephone, it may be developed in multiple places at the same time. (Check into patent office
lawsuits.)
So what does "deploy" mean. If we're talking about something that's a self-motivated
intelligence, then I think it's got to mean "on active storage OR on a system connected to the
internet, even indirectly". It can't just mean "controlling a public facing web page", though that
is certainly one kind of deployment.
REPLY
Bugmaster Mar 7
Approximately negative 10..20 years, since superhuman AI is pretty commonplace. For
example, the addresses on your snail-mail letters are routinely scanned by an AI that is
superhumanly good at handwriting recognition. Machine translation systems still kind of suck
at the quality of their translations, but are superhumanly good at quantity. Modern image-
generation programs are still subhuman compared to top artists, but will easily outperform the
average human at rendering art. Most modern computer chips are designed with the aid of AI-
powered circuit-routing software; no human could conceivably perform that task. And I could
keep going in this vein for a while...
REPLY (1)
Bugmaster Mar 7
Oh, well, in that case super-human AI does not currently exist, and probably won't
exist for a very long time, since no one knows how to even begin building one. On the
other hand, humans do exist; they can do anything a human can do at least as well;
and some of them are quite malicious. Should we not focus on stopping them,
instead of a non-existent AI ?
REPLY
(*) and indeed, forgive me for saying that the impact of the sheer speed of all digital things will be a
recurring theme of my own substack
REPLY (1)
TGGP Mar 7
Who saw a nuclear plant being built during the Manhattan Project?
REPLY (1)
TGGP Mar 7
I'm saying that in fact there have been secret nuclear weapons programs which rivals
didn't know about until the nuclear test was conducted.
REPLY (1)
Bartleby Mar 7
If the price of cheap energy is a few Chernobyls every decade, then society isn't going to allow it.
Mass casualty events with permanent exclusion zones... you can come up with a rational calculus
that it’s a worthwhile trade off, but there’s no political calculus that can convince enough people to
make it happen. So as an example, nuclear energy actually makes the opposite argument he wants
it to.
REPLY (3)
Victualis Mar 7
This seems to be an outcome of a strongly individualist society with frozen priors, but the
indications are that people under 30 are much less individualistic than their elders currently
running things. It seems possible to me that by 2050 a couple of large scale nuclear disasters
every year might be an accepted cost of living in a good society, especially once the 1970s
nuclear memes and prevention at all costs have been replaced by practical remediation action
and a more pragmatic view of tradeoffs.
REPLY
Coal power plants (and high-altitude exposure, such as plane trips and Denver, CO) involve higher
radiation levels than nuclear power plants.
That we don't allow it is a choice. Almost every other source of power has killed more people
than nuclear (I think solar is the only exception - even wind has killed more - and most have
killed many orders of magnitude more people).
REPLY (2)
Ch Hi Mar 7
Solar has actually killed lots of people. Usually installers doing roof-top installations.
REPLY (2)
JamesLeng Mar 7
If you're counting construction accidents only tangentially related to the actual power
source, probably ought to also count anyone who ever died in a coal mine, which I'm
pretty sure still leaves solar coming out very far ahead.
REPLY (1)
Ch Hi Mar 7
Well, yes, but I was comparing it with nuclear. There things are a lot closer.
REPLY (1)
Erwin Mar 10
Did you ever have a closer look at uranium mines?
REPLY
Bartleby Mar 7
This is the rational case, but this is a pretty safe space to make it. I don't think it's a
political case, because there's a unique horror-movie level of fear in society surrounding
nuclear power. That could change, but it won't change fast enough to matter to us.
That's why it isn't really a "choice," or rather it isn't really an option given the reality. I
don't think it makes sense to treat it like it could be one if we just converted the world to a
rationalist point of view. Clearly, that's not in the cards.
If I were going to try to make a rational case against nuclear energy, I'd probably point out
a danger that didn't seem realistic until recently- unpredictable conventional warfare at a
nuclear power plant. We got lucky this time, but I don't know how you can argue against
that being a growing possibility. I'm no expert but I imagine the outcome of a conventional
bomb hitting a reactor, in error or not, would be worse than a conventional bomb dropped
on any other power generation technology (except maybe certain power generating
dams.)
REPLY
Gbdub Mar 7
There has been exactly one Chernobyl over many decades, and that’s the only nuclear
accident that seems to have definitely killed any members of the public. It was also the result
of profoundly stupid design and operating decisions that nobody would do again precisely
because of Chernobyl.
Really? I would have said gasoline and nuclear were huge net disbenefits. Take gasoline out of the
equation and you take away the one problem nuclear is a potential solution for.
(I think. I have no actual feel for what the global warming situation would be in a coal-yes, ICEs-no world.)
REPLY (5)
Pete Mar 7
I have a feeling that without ICEs we wouldn't have the farming industrialization which enables
feeding the world and having most people not work in farming. IMHO the cost of never starting to
use ICEs would be a famine-restricted population and a much worse standard of life for billions of
people than even the IPCC climate report's worst case scenarios expect.
REPLY (1)
Gbdub Mar 7
Nitrogen fertilizer is critical for farming at our current scale, and it is sourced
primarily from natural gas.
REPLY (1)
Gbdub Mar 7
It can be, but whether it ever would have happened without fossil fuel is a
question.
REPLY (1)
My guess is that the alternate hypothetical that you are citing, with no
fossil fuels - no natural gas, or oil, or coal - is far more drastic. I've read
claims that the industrial revolution was mainly a positive feedback
loop between coal, steel, and engine production. It wouldn't surprise
me if a non-fossil-fuel world would be stuck at 18th century technology
permanently. With the knowledge we have _now_, I think there would
be ways out of that trap, nuclear or solar, but they might well never get
to that knowledge.
REPLY
Steam powered shipping is also more expensive than that powered by residual fuel
oil, so poor parts of the world would have had less access to imported food in times
of harvest failure. Harvest failures would have been more frequent because of the
high running costs of steam powered pumps for irrigation and steam transport.
Famines would have been more of a feature. Whether the results would have been
worse than "IPCC climate report worst case scenarios", I do not know.
REPLY (1)
Could you elaborate on that? Is that due to manual handling of solid fuels? Or
something else? I vaguely recall that some coal burning systems use coal slurry
to handle it much like a liquid. I agree that if steam power was unavoidably much
more labor intensive than ICEs, then that has all the adverse downstream
implications that you cite.
REPLY (1)
How do you figure that? The principal use case for gasoline is running motor vehicles, which
fission will never be a good power source for, even theoretically, let alone in practical reality.
REPLY (1)
Ch Hi Mar 7
Sorry, but you're wrong. The society would need to be structured a bit differently, but
electric cars were developed in (extremely roughly) the same time period as gasoline
powered cars. And there were decent (well, I've no personal experience) public transit
systems in common use before cars were common. Most of the ones I've actually seen
were electric, but I've seen pictures of at least one that was powered by a horse. It was
the Key System E line from the ferry terminal up into the Berkeley hills, where the
Claremont hotel is currently located.
REPLY (1)
To be a bit more clear, I'm not saying that electric cars, powered by energy that could
have been generated at a nuclear plant, can't be a good alternative for ICE cars;
we've pretty well proven that they can by now. I'm saying — in response to Robert
Leigh's claim that gasoline (and thus by implication, ICE engines) is the *only*
problem where nuclear power is a good alternative — that you can't put a nuclear
reactor on a car as a power source. If you put a nuclear reactor in a nuclear power
plant, on the other hand, you're solving a lot more problems than can be reasonably
addressed by gasoline. So either way, I don't see where he's coming from on this.
REPLY
WaitForMe Mar 7
But without gasoline how would you power all the vehicles we use? I think without it we
would be a lot poorer, so hopefully that wealth makes up for its bad effects.
REPLY
And as I said above, the more you start from pure C (e.g. coal) instead of a mixture of
C and H (e.g. nat gas), the worse you make your CO2 emissions problem. So in a
world without liquid hydrocarbons, I think CO2 emissions would've risen faster and
sooner, not the other way around.
-----------------
Can somebody explain this part? Isn't this mixing expected returns from a _single_ coin flip with
expected returns from a series of coin flips? If you start with $1 and always bet 100%, after t steps
you have 2^t or 0 dollars - the former with probability 2^-t . So your expected wealth after these t
steps is $1, which is pretty much the same as not betting at all (0% each "step").
Math aside, it's pretty obvious that betting 100% isn't advisable if you are capped at 100% returns.
I'm sure even inexperienced stock traders (who still think they're smarter than the market) would
be a lot less likely to go all in if they knew their stock picks could *never* increase 5x, 10x, 100x... If
doubling our wealth at the risk of ending humanity is all that AI could do for us, sure, let's forget
about AI research. But what if this single bet could yield near-infinite returns? Maybe "near" infinite
still isn't enough, but it's an entirely different conversation compared to the 100% returns scenario.
REPLY (2)
Tatterdemalion Mar 7
Scott specifies a 75% probability of heads.
REPLY
Pete Mar 7
> If you start with $1 and always bet 100%, after t steps you have 2^t or 0 dollars - the former
with probability 2^-t
No, since the assumption is that you can predict the coin flip is better than chance, specifically
75%, so the probability of the former scenario is much higher than 2^-t.
REPLY (1)
Malte Mar 7
Ah, of course. Knew I missed some variable. Thanks.
REPLY
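Spelling out the corrected arithmetic: with p = 0.75, the all-in bettor ends with 2^t with probability
0.75^t and $0 otherwise, so expected wealth is (2 * 0.75)^t = 1.5^t, which grows without bound
even as the chance of not going broke, 0.75^t, vanishes (roughly 0.03% after 28 flips).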
As I look at it, AI sits at the intersection of statistics and computer science. We could subdivide
areas of computer science further into elements like data engineering and deep learning. So, at
what point would you use the above logic to prevent research into certain areas of compsci or
statistics under the premise of preventing catastrophe?
I don't think this is splitting hairs either - we already have many examples of ML and Deep Learning
technologies happily integrated into our lives (think Google maps, Netflix recommendations etc),
but at what point are we drawing the line and saying "that's enough 'AI' for this civilisation" - how
can we know this and what are we throwing away in the interim?
REPLY (1)
Ch Hi Mar 7
Well, it might have been reasonable to draw a line saying "Slow down and consider the
effects" before Facebook was launched. I wouldn't want to stop things, but I think a lot of our
current social fragmentation is due to Facebook and other similar applications.
Note that this ISN'T an argument about AI, but rather about people and their motivational
systems. People have a strong tendency to form echo chambers where they only hear the
comments of their ideological neighbors, and then to get tribal about it, including thinking of
"those folks over there" as enemies.
REPLY (1)
Jordan Mar 7
I guess the question would be whether one could have seen the far-reaching effects of
Facebook and social media before the damage had been done.
Same here - we might already have crossed a tipping point and we don't know it
REPLY (1)
Ch Hi Mar 7
There are actually small indications that we *have* crossed a tipping point. Not of AI,
but of the way humans react to conversational programs. But we've been working
towards that particular tipping point quite diligently for years, so it's no surprise that
when you add even a little bit more intelligence or personalization on the other end
you get a strong effect.
REPLY
TGGP Mar 7
America is not 100% suburban.
REPLY (1)
TGGP Mar 7
Yes, they replaced roads designed for horses with roads designed for more modern
vehicles.
REPLY (1)
TGGP Mar 7
Automobiles predate 1940, streets had already been replaced by then. In
1915 there were 20 million horses while the human population was roughly
100 million. Many people would commute via horsedrawn transit.
REPLY
thefance Mar 8
The real issue is risk of ruin. Modernism can be reverted because it's not an existential risk.
REPLY (1)
thefance Mar 8
I'm saying "betting 100% on modernity" isn't really analogous to "betting 100% on
AI" because it's not a 100% wager so much as a 100% level of confidence. I think
Modernism has downsides too, but it hasn't irrevocably bankrupted civilization yet.
There's still time to turn the ship around if we so choose.
REPLY
gregvp Mar 8
There is a concept in economics called "revealed preference". The idea is, don't ask people
what they prefer, look at what they buy. That tells you their real preferences.
The parts of the US that are growing are the "sprawl" parts: various cities in Texas, and
Atlanta. Especially Atlanta.
Unpalatable as it may be to you and me, that tells you what most people want. The tyranny of
the majority may be oppressive, but it's not nearly so oppressive as other tyrannies.
REPLY (1)
Tatterdemalion Mar 7
Don't confuse the Kelly criterion with utility maximisation (there kind of is a connection, but it's a bit
of a red herring).
If you have a defined utility function, you should be betting to maximise expected utility, and that
won't look like Kelly betting unless your utility function just happens to be logarithmic.
The interesting property of the Kelly criterion (or of a logarithmic utility function compared to any
other, if you prefer) is that if Alice and Bob both gamble a proportion of their wealth on each round
of an iterated bet, with Alice picking her proportion according to the Kelly criterion and Bob using
any other strategy, then the probability that after n rounds Alice has more money than Bob tends to
1 as n tends to infinity.
That doesn't tell you anything about their expected utilities (unless their utility functions happen to
be logarithmic), but it's sometimes useful for proving things.
REPLY (1)
If it's the former, Bob could do something like always betting so that he will have $0.01 more
than Alice if he wins, until he does win, and then always betting the same as Alice. This would
make him very likely to come out ahead of Alice, at the expense of a small probability of going
bankrupt.
REPLY (1)
Tatterdemalion Mar 7
Oh god, now you're asking. I'm on a phone, and hate reading maths on it, so check this on
Google, but there's an obvious-but-weak form of it where Alice and Bob are each
constrained to bet the same proportion in every round (take logs and use the CLT), and I
think there are stronger, more general versions too.
REPLY
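A quick simulation of the fixed-proportion form (parameters assumed for illustration: Bob stakes a
fixed 90% against Alice's Kelly 50%, both on the same 75% coin):

import random

def p_alice_ahead(f_alice=0.5, f_bob=0.9, p=0.75, rounds=200, trials=10_000):
    # Alice (the Kelly fraction) and Bob (another fixed fraction) face identical flips.
    ahead = 0
    for _ in range(trials):
        a = b = 1.0
        for _ in range(rounds):
            win = random.random() < p
            a *= (1 + f_alice) if win else (1 - f_alice)
            b *= (1 + f_bob) if win else (1 - f_bob)
        ahead += a > b
    return ahead / trials

print(p_alice_ahead())  # tends toward 1 as rounds grows, per the property above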
Tatterdemalion Mar 7
I think this sort of argument only makes sense if the numbers you plug in at the bottom are broadly
correct, and the numbers you're plugging in for "superintelligent AI destroys the world" are
massively too high, leading to an error so quantitative it becomes qualitative.
REPLY (1)
Ch Hi Mar 7
I don't think we have a reasonable way to estimate how likely a self-motivated super-intelligent
AI is to destroy the world. So try this one: How likely is a super-intelligent AI that tries to do
exactly what it is told to do to destroy the world? Remember that the people giving the
instructions are PEOPLE, and will therefore have very limited time horizons. And that it's quite
likely to be trying to do several different things at the same time.
REPLY
Emma_M Mar 7
The issue, for me anyway, is not that old Nuclear activists were unable to calculate risks properly.
The issue is they basically didn't know anything about the subject they were very worried
about, partially because nobody did. In the end, yes, they made everything worse. The world might
have been better served had the process of nuclear proliferation been handled by experts
chosen through sortition.
The experts in AI risk are *worse than this.* The AI is smarter than I am as a human? Let's take that
as a given. What does that even mean? There is a very narrow band of possibilities in which AI will
be good for humanity, and an infinite number of ways it could be catastrophic. There's also an
infinite number of ways it could be neutral, including an infinite number of ways it could be
impossible. The worry is itself defined outside of human cognition, in a way that makes the issue
even more difficult than it otherwise would be, so how are you supposed to calculate risk if you
can't even define the parameters?
REPLY (2)
Ch Hi Mar 7
It is quite clear that human equivalent AI is possible. The proof relies on biology and CRISPR,
but it's trivial. And it is EXTREMELY probable that an AI more intelligent than 99.9% of people
is possible using the same approach. Unfortunately, there are very good grounds to believe
that any AI created in that manner would be as self-centered and have as short a planning
horizon as people generally do. This is just an existence argument, not a recommendation.
AI is not a particular technology. Currently we are using a particular technology to try to create
an AI, but if that doesn't work, there are alternatives. An at least weakly superhuman AI is
possible. And if you don't define "good for humanity" then the only good I can imagine is
survival. It's my opinion that given the known instability of human leaders and the increasing
availability of increasingly lethal weaponry, if leadership of humanity is not replaced by AIs, we
stand a 50% (or higher) chance of going extinct within the century, and that it will continue
increasing. And AI is, itself, an existential threat, but if we successfully pass that threat, the AI
will act to ensure human survival. I take this to be a net good. It also is quite unlikely to derive
pleasure from inflicting pain on humans. (The 50% chance is because it might not like us
enough to ensure our continued existence, and might find us bothersome...and is a wild
guess.)
Once people start invoking infinities, I start doubting them. Perhaps you could rephrase your
argument, but I think its main flaw is that it doesn't consider just how dangerous humans are
to human survival.
REPLY
thefance Mar 8
One of the things I learned from LW (if I'm remembering correctly) was about the multi-armed
bandit problem. Which is a situation where you need to experiment with wagers just to
discover the payoff structure. Without hindsight, the payoff matrix is a total blackbox.
Therefore, whether the "optimal" strategy is risky or conservative is anyone's guess, a priori.
I do think a lot of AI mongering is a result of not understanding the nature of intelligence. If you
can manage to put constraints on it, though, like how the study of thermodynamics bounds our
expectations of engines, AI becomes less scary.
REPLY (1)
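For anyone unfamiliar with the setup, a minimal epsilon-greedy sketch (the arm payoffs
0.2/0.5/0.8 and the 10% exploration rate are invented): the payoff structure is a black box that can
only be learned by spending pulls on it.

import random

def epsilon_greedy(true_means=(0.2, 0.5, 0.8), eps=0.1, pulls=10_000):
    # Keep a running payoff estimate per arm; mostly exploit, occasionally explore.
    n = len(true_means)
    counts, estimates = [0] * n, [0.0] * n
    total = 0.0
    for _ in range(pulls):
        if random.random() < eps:
            arm = random.randrange(n)  # explore a random arm
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploit the current best guess
        reward = float(random.random() < true_means[arm])  # Bernoulli payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total += reward
    return total / pulls, [round(e, 2) for e in estimates]

print(epsilon_greedy())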
Emma_M Mar 8
I think you are right that a part of the issue is not understanding the nature of intelligence.
But I think that's just one aspect. Not understanding the nature of intelligence means we
also don't have a good account of psychology, which means we also don't have a good
account of neurology, nor philosophy of mind. Put another way, we don't know "where"
intelligence comes from, how it exactly relates to neurology, how that relates to decision
making, or how any of that is supposed to apply to something explicitly non-human and
"more intelligent than humans" even if we did know all of that.
I can fully admit AI might kill us all. But I think if it does, it's more likely to be because
people with the priorities of Scott Alexander are extremely worried about it, and, through
ignorance, are going to give it the machine equivalent of a psychological issue, like, say,
psychopathy or low-functioning autism.
thefance Mar 8
I have an alternate perspective.
I have this pet theory that the reason intelligence evolved is because it allows us to
simulate the environment. I.e. simulation allows us to try risky actions in our mind,
before we try them irl. It's often calorically cheaper than putting life and limb on the
line. This dovetails with my hunch that life is just chemical disequilibrium. It dovetails
with my hunch that what set humans apart from apes was cooking. And it dovetails
with why I think humanity acquired a taste for stories/religion/sports. It's
thermodynamics, all the way down.
If true, then Carnot's Rule bounds the threat of AI. Just like it bounds everything else
in the universe. A jupiter brain might have orders of magnitude more processing
power than a human brain. But "intelligence vs agency" is sigmoidal, and humanity is
already rightward of the inflection point. Thus, the advantage that a jupiter brain
offers over a human brain is subject to diminishing returns. AI still might do scary
things, but it's unlikely to do things that couldn't already be accomplished by an evil
dictator. I suspect most skeptics of the singularity share this intuition, but can't find
the right words.
thefance Mar 9
I don't think there are any Algernon-style drawbacks. I think the bottleneck is just
calories. Human brains already represent a big capex and big opex. Too big
for the diets of most species to afford. Meanwhile, people with 200 IQ are
like giraffes, in that the problems they can solve represent a tiny set of
high-hanging fruit.
REPLY
This is not to say that averting AGI is impossible, just that it would require solving an extremely
difficult coordination problem. You'd need to not only convince every major power that machine
learning must be suppressed, but also to assure it that none of its rivals will start working on the AI
equivalent of Operation Smiling Buddha.
REPLY
konshtok Mar 7
what are the chances of a newly developed AI having both the ill intent and the resources to kill us
all?
REPLY (2)
Pete Mar 7
I won't comment on the chances of the "ill intent" part. However, if we simply look at the current
state of cybercrime, we should assume that any newly developed ill-intended AI connected
to the internet, with capabilities equivalent to (or better than) a modestly skilled
teenage hacker and perhaps the time to find a single vulnerability in some semi-popular
software, would be able to amass in a span of some weeks/months: (a) financial
resources amounting to multiple millions of dollars equivalent in cryptocurrencies; (b) a similar
scale of new, external compute power and cloud hardware to run its "thinking" or "backups"
on; (c) a dozen "work from home" people for various mundane tasks in the
physical world, just as cybercriminals hire 'money mules'; and (d) a few shell companies
operated by some lawyer to whom it can mail orders to arrange purchases or other actions
which require a legal persona.
Up to this point there's no speculation; this is achievable because it has been achieved by
multiple human cybercriminals. Now we can start speculating: whether those are sufficient
resources to kill us all depends on the smartness of the agent; however, I'd guess so? Those
assets would be sufficient to organize the making and launching of a bioweapon, if the AI
figured out how to make one.
REPLY (1)
Gbdub Mar 7
But the AI would also have to do all that (and more importantly, LEARN how to do all that)
without tipping off its creators to the fact that it’s gone off the rails, and then win the
ensuing struggle. And the humans fighting back against the AI will have less than
superhuman but very powerful AIs on their side.
REPLY (1)
And after all, assuming that no very special hardware is needed, once the model
gains its first money, it can rent cloud hardware to run a copy of itself outside of any
possibility of supervision by the creators.
REPLY (1)
Gbdub Mar 8
Cybercrime is obnoxious but it’s hardly an existential threat and it’s generally a
known attack vector. At some point the AI is going to have to start significantly
manipulating the physical world to kill people and that opens up a ton of chances
to get caught.
AIs as we know them can be given a huge training database, but they are still
“learn by doing” agents - they need some sort of feedback to self improve. If
they are doing something their creator did not train them on, especially if it’s
something no human has ever done, they are going to have to experiment in the
“real world” a bit. This should eventually get discovered unless the AI authors
are either colluding or completely asleep at the wheel.
Razorback Mar 8
If we develop some AI that is caught being naughty, and we successfully
shut it down, is that the end of AI research? Do we all agree never to try
again? I don't think we will. Eventually our adversary will be a
superintelligence.
You are an amateur chess player. You have developed an opening that beats
all your friends. You can't see how anyone could beat it. You will face
Magnus Carlsen in a game soon.
I tell you how I'm almost sure you will lose. You claim that you've thought of
all possible counters to your special opening, and you haven't found any. I
still think he will beat you. You ask me to give some examples of how he
could do so. I look at your opening and since I don't know much about chess, I
can't find any problems with it. I might give some suggestions but you
counter that you've already thought of that. I can't find flaws in your
strategy.
Gbdub Mar 8
It’s not clear to me why a newly minted super intelligent AI is Carlsen in
that scenario rather than the amateur (with perhaps an IQ much higher
than Carlsen's).
https://mobile.twitter.com/jmkorhonen/status/1625095305694789632
REPLY (1)
Brett Mar 7
The writer Austin Vernon had a pair of good pieces on nuclear as well:
https://austinvernon.site/blog/nuclear.html
https://austinvernon.site/blog/nuclearcomeback.html
There were a specific set of conditions that favored nuclear power until the 1980s, and it
wasn't just regulatory. They benefited from not having to compete in deregulated electricity
markets, a lot of the early plants were made rather cheaply and weren't exceptionally reliable
(upgrades later improved that but also made nuclear more expensive), and they didn't have to
compete with cheap gas power especially.
Nuclear also benefits from regulation. It's how they get their liability protection from
meltdowns - if they actually had to assume full liability for plant disasters, it's questionable
whether they could afford the cost of insurance.
REPLY (1)
Pete Mar 7
I wouldn't say that it's a nuclear-specific liability protection - if e.g. coal plants had
to assume full liability for their consequences, then the cost of coal-driven electricity
would be even larger than the nuclear insurance you mention, since normal operation of
coal plants causes more cancer than any reasonable estimate of nuclear plant meltdown
risk, and that's ignoring any carbon/warming effect.
Of course if we suddenly start charging one type of energy (e.g. nuclear) for its negative
externalities, then it becomes uncompetitive - but if we did that for all means of
electricity generation, I think nuclear would be one of the options that would work out.
REPLY (1)
Gbdub Mar 7
Right, and this is why anybody who thinks the solution to climate change involves
carbon tax but still opposes nuclear ought to smack themselves on the head and say
“why didn’t I think of that!” The logic is right there.
REPLY
Chris K. N. Mar 7
I agree with you on AI, but not necessarily on nuclear energy (or even housing shortages). Partly
because I don't agree that "all other technologies fail in predictable and limited ways."
Yes, we're in a bad situation on energy production and lots of other issues, and yes, we are reacting
too slowly to the problems.
But reacting too slowly is pretty much a given in human affairs. And, I'm not sure the problems we
are reacting too slowly to today, are worse than the problems we would be reacting too slowly to if
we had failed in the opposite direction.
To continue with nuclear as an example: I'm generally positive to adding a lot more nuclear power
to the energy mix. But I would like to hear people talk more about what kind of problems we might
create if we could somehow rapidly scale up production enough to all but replace fossil fuels?
(≈10X the output?) And what kind of problems would we have had if we started doing that 50 years
ago?
With all the current enthusiasm for nuclear energy, I wish it were easier to find a good treatment of
expected second- and higher-order effects of ramping up nuclear output by even 500% in a
relatively short period of time.
Sure, nuclear seems clean and safe now. But at some point, CO2 probably seemed pretty benign,
too. After all, we breathe and drink it all day long, and trees feed off it. I know some Cassandras
warned about increasing the levels of CO2 in the atmosphere more than a hundred years ago, but
there was probably a reason no one listened. "Common sense" would suggest CO2 is no more
dangerous than water vapor. It was predictable, but mostly in hindsight.
So what happens when we deregulate production of nuclear power while simultaneously ramping
up supply chains, waste management, and the number of facilities; while also increasing global
demand for nuclear scientists, for experts in relevant security, for competent management and…
REPLY (1)
Chris K. N. Mar 8
I'm not sure I understand the first part of your comment. What existential risk is seems
pretty self-evident to me.
To me it means: Risk of an event or series of events that would cause the death of a large
share of humanity – billions of people – and trigger the collapse of civilization. Examples
are large asteroid impacts, nuclear war at a certain scale, lethal enough pandemics,
severe enough climate change…. You seem to be saying that these risks are often
exaggerated, and so we don’t know which ones we are right to care about? If I got that
right, I would think that any non-zero chance of something like that happening seems like
a risk worth taking seriously.
We are creative, sure. But pretty much every solution we come up with creates a new
problem (not always as serious as the original problem, but often enough) when scaled to
a population level. The new problem requires a new solution, which creates new
problems. It is almost a natural law, related to evolution: Our creativity is an adaptation
mechanism, and adaptation typically leads to selection pressures (on an individual or
group level).
When populations are small and local, and solutions and technology are weak and local,
that doesn’t affect the natural balance of the planet much. But once everyone on the
planet is a single population, and problems and solutions have global impact, our
creativity and the solutions themselves become existential risks (imagine if we got the
COVID vaccine tragically wrong, and everyone who took it will spontaneously combust, or
if we eradicate some invasive species of mosquito somewhere, so as to get rid of some
disease, just to realize we triggered something that makes eco-systems start coming
apart at the seams, or if we do gain of function research and ... you know.)
REPLY
Has anyone put together an AI research equivalent of the IPCC climate projections? Basically laying
out different research paths, from "continued exponential investment in compute with no breaks
whatsoever" to "ban anything beyond what we have today". This would enable clear discussion, in
the form "I think this path leads to this X risk, and here's why". Right now the discussion seems too
vague from a "how should we approach AI investment in our five year plan" perspective, and that's
where we most urgently need it to be practical.
REPLY (3)
Ch Hi Mar 7
When you ask for that remember that the IPCC routinely trimmed excessively dangerous
forecasts from their projections...for being out of line with the consensus. (They may also have
trimmed excessively conservative forecasts, but if so I didn't hear of that.)
REPLY (1)
1) create reference language to encourage AI researchers to adopt as safety policy (e.g. define
exactly what we want OpenAI to agree to, and gradations of commitments)
2) work toward policy language to put in international agreements with other countries. As with
climate change, US policy in isolation isn't enough
REPLY
Victualis Mar 7
AI research is not following a linear trajectory so it's difficult to do practical planning.
REPLY (1)
3) Specific alignment testing required for (a) public access or (b) moving to work on the
next model
None of these inherently make research safer. But they encourage transparency, and
provide opportunities for routine press coverage in a way that can pressure companies to
care. When there's a big safety recall on cars, it makes the news and is bad PR for car
companies; we want those types of incentives on AI companies.
We can't directly plan for the _results_ of the research -- that's the nature of research --
but we can push for clear disclosure of both plans and policy, and discuss how different
safety policies are likely to impact research rate.
REPLY (1)
That looks pretty close to actual reasoning, from a lot of people's perspective.
REPLY
TGGP Mar 7
"These are words with 'D' this time!"
https://www.youtube.com/watch?v=18ehShFXRb0
REPLY
Lupis42 Mar 7
The people who opposed nuclear power probably put similar odds on it that you put on AI. If your
"true objection" is that this is a Kelly bet with ~40-50% odds of destroying the world, your
objection is "the proponents of <IRBs/Zoning/NRC/etc> are wrong, were wrong at the time for
reasons that were clear at the time, and those reasons clearly do not apply to AI".
Otherwise, we're back to "My gut says AI is different, other people's guts producing different
results are misinformed somehow"
REPLY
Hello Mar 7
A nuclear energy expert illustrates how lots of own-goals by the industry and regulatory madness
prevented and prevents widespread adoption. “The two lies that killed nuclear power” is among my
favorite posts. https://open.substack.com/pub/jackdevanney?r=lqdjg&utm_medium=ios
REPLY (1)
Josaphat Mar 7
“regulatory madness”
The funny thing is I keep hearing that meme repeated but never hear exactly what regulations
they want deleted.
I suspect any actual response would be vague like a SA post on bipolar treatment or a Sarah
Palin “all of them”.
The other funny thing is that 4 of the 5 nuclear engineers I’ve discussed the topic with are in
the nuclear cleanup business.
REPLY (2)
SimulatedKnave Mar 7
Go look at the guy's substack, then. There's examples.
REPLY
Hello Mar 7
One big one is ALARA, or "as low as reasonably achievable" wrt radiation. Obviously, this
is a nebulous phrase and gets used to apply ever increasing pressure and costs to
operators, to an extreme. Another is LNT, or "linear no threshold", which assumes harm
scales linearly with total dose down to zero and so essentially ignores the dose-response
relationship to radiation over time.
REPLY
Matt Mar 7
This is a similar line of reasoning to the one Taleb takes in his books Antifragile and Skin in
the Game. Ruin is more important to consider than probabilities of payoffs, especially if what's
at risk is at a higher level than yourself (your community, environment, etc.). If the downside is
possible extinction, then paranoia is a necessary survival tactic.
REPLY
Btw, whatever the new discovery may be, pollution cannot be controlled unless the population explosion is.
REPLY
Most of these counterexamples are good ones, but the YIMBY folks are actually making the same
basic mistake that the mistaken people in the counterexamples made that made them wrong:
they're not looking beyond the immediately obvious.
The homelessness epidemic which they speak of is not a housing availability or affordability
problem. It never was one. Most people, if they lose access to housing or to income, bounce back
very quickly. They can get another job, and until then they have family or friends who they can
crash with for a bit. The people who end up out on the streets don't do so because they have no
housing; they do so because they have no meaningful social ties, and in almost every case this is
due to severe mental illness, drug abuse, or both.
Building more housing would definitely help drive down the astronomical cost of housing. It would
be a good thing for a lot of people. But it would do very little to solve the drug addiction and mental
health crises that people euphemistically call "homelessness" because they don't want to confront
the much more serious, and more uncomfortable, problems that are at the root of it.
REPLY (3)
dionysus Mar 7
I've seen this argument before, and I believe it partially. But an opponent would say that
bouncing back is a lot easier with cheaper housing than with expensive housing, both for those
with social ties and those without. People who despair at how they're going to bounce back
might start to abuse drugs, which in turn might aggravate mental illness.
As evidence, opponents say that housing cost is the number one predictor of homelessness
(e.g. https://www.latimes.com/california/story/2022-07-11/new-book-links-homelessness-city-prosperity).
Brett Mar 7
This. I think it's a big deal if a drug addict or mentally ill person can at least get a private
room for rent (especially as part of a broader set of support services) versus being out on
the street or in a dangerous shelter.
REPLY
I would say that correlation does not imply causation. There's another factor at work here
which the article doesn't mention: migration.
The authors of the study can look at prices all they want, but the statistic they don't seem
to be looking at is "what percentage of the long-term homeless population is comprised
of individuals who do not have problems with drug abuse or mental illness?"
REPLY
And a big anonymous *wealthy* city -- where the price of real estate is sky high -- is even
better, because *those guys* are probably going to have some welfare programs, too.
REPLY
Brett Mar 7
I don't think you're wrong about people bouncing back quickly most of the time, especially if
they have a job or family support network. But at the macro-scale, it really is about housing
affordability.
Rates of homelessness track consistently with housing affordability issues, not rates of drug
addiction or mental illness. As the piece I'm linking to below points out, Mississippi has one of
the most meager public assistance programs in the country for mental health - and yet one of
the lowest rates of homelessness in the country. West Virginia, meanwhile, is one of the worst
states when it comes to drug addiction - but also has one of the lowest homelessness rates in
the country.
We even saw it with the deinstitutionalization movement. They used to think that was the
source of a lot of homeless people, but most of them apparently did find cheap housing - even
if it was stuff like rooms for rent and dilapidated SRO stuff.
https://noahpinion.substack.com/p/everything-you-think-you-know-about
REPLY (1)
Brett Mar 7
The piece itself actually talks about this. It's mostly locals, not migrants - 65% of LA
County homeless have lived in the area for 20 years or more, and 75% of them lived
in LA before becoming homeless.
REPLY (1)
dionysus Mar 8
Please elaborate. What kind of problem does it represent, and how do you know?
REPLY
The Kelly Criterion says "Don't bet 100% of your money at once". But it also says it's fine to bet
100% - or even more than 100% - as long as you break it into smaller iterated bets.
To analogise to AI research, the Kelly Criterion is "Don't do all the research at once. Do some of the
research, see how that goes, and then do some more".
There's not one big button called "AI research". There's a million different projects. Developing
Stockfish was one bet. Developing ChatGPT was another bet. Developing Stable Diffusion was
another bet.
The Kelly Criterion says that as you make your bets, if they keep turning out well, you should keep
making bigger and bigger bets. If they turn out badly, you should make smaller bets.
To analogise to nuclear, the lesson isn't "stop all nuclear power". It's "Set up a bit of nuclear power,
see how that goes, and deploy more and more if it keeps turning out well, and go more slowly and
cautiously if something goes wrong."
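(To make the iterated-bets point concrete, here's a minimal sketch, assuming a simple repeated even-money bet with a 60% win chance; the function and numbers are my own illustration, not from the post:)

```python
# Iterated betting: an all-in bettor vs. a Kelly bettor on an even-money
# bet with a 60% win probability.
import random

def simulate(fraction, p_win=0.6, rounds=100, bankroll=1.0):
    """Bet `fraction` of the current bankroll each round; return the final bankroll."""
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake if random.random() < p_win else -stake
        if bankroll <= 0:
            return 0.0  # ruined: no bankroll left to continue betting
    return bankroll

p, b = 0.6, 1.0          # win probability, even-money payout
kelly = p - (1 - p) / b  # Kelly fraction f* = p - q/b = 0.2 here

random.seed(0)
print(simulate(1.0))     # all-in: one loss wipes you out (almost surely 0)
print(simulate(kelly))   # Kelly: compounds at roughly 2% per round
```

The analogy carries over: each individual project risks only a fraction of the total "bankroll", so a failure is survivable and informs the size of the next bet.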
REPLY (1)
thefance Mar 8
The denominator of "100%" is your current bankroll, not some predetermined runway.
REPLY
And if you look at experts who have considered the problem, they aren't anything like unanimous in
agreeing on the danger, much less pushing that degree of alarmism.
And that's not even taking account of the fact that, fundamentally, the AI risk story pushes a
narrative that nerds really *want* to believe. Not only does it let them redescribe what they're doing
(from working as a cog in the incremental advance of human progress to grappling with the
most important issue ever, which is appealing even if you are building AIs); it also rests on a narrative
where their most prized ability (intelligence) is *the* most important trait (it's all about how smart
the AI is, because being superintelligent is like a super-power). Obviously this doesn't mean you should
ignore their object-level arguments, but it should raise your prior on how likely it is that many people in
the EA and AI spheres would reach this conclusion conditional on it being false.
REPLY (3)
Tom J Mar 7
There are also plenty of well-documented physical and theoretical constraints on the
capabilities of any algorithm, so all this speculation basically boils down to "imagine an
algorithm so infinitely smart that it is no longer bound by physical reality." And while I agree
that an algorithm unbound by the laws of physical reality would be pretty scary, I'm pretty sure
those laws will continue to apply for the foreseeable future.
REPLY
Emma_B Mar 7
My impression is also that AI risk is in fact something especially appealing for rationalists,
because AI is a fascinating, intelligence-related subject, and also probably because of
a tendency towards anxiety.
REPLY (1)
Tom J Mar 7
Yeah, it all seems kind of built on this Dungeons and Dragons sort of model of the world
where a high enough Intelligence stat lets you do anything (and a suspicious reluctance to
actually learn any computer science and apply it to the galaxy-brained thought
experiments we're so busy doing).
REPLY (1)
Personally, I think they are underestimating the fact that 'natural' problems tend to
either have very low complexity or very high complexity and, in particular, the kind of
Yudkowsky evil god style AI would require solving a bunch of really large problems
that are at least something like NP complete (if not PSPACE complete). On plausible
assumptions about the hardness of NP those just aren't things that any AI is going to
be able to do w/o truly massive increases in computing power (which itself may make
the problems harder).
What's difficult is that it's very hard to make this intuition rigorous. I mean my sense is
that surreptitiously engineering social outcomes with high reliability (knowing that if I
say X, Y and Z I can manipulate someone into doing some Q) is really
computationally difficult even if simple manipulation with relatively low confidence is
relatively easy. But it's hard to translate this intuition into a robust argument.
REPLY (1)
Tom J Mar 8
Yeah to be fair I think that's part of it--there's a vast set of problems that we
intuitively know are quite complex, but they're also hard (if not impossible) to
formally define, so there seems to be a certain approach, popular in these
circles, that concludes they're meaningless or trivial. But if you can't even
formally define the problem, throwing more compute at it won't get you any
closer to solving it.
REPLY (1)
I think the problem is more that most problems in the real world are really
complex, in the sense of having many different parts that can be optimized. I
mean, there is a way in which asking for the most efficient solution to the
traveling salesman problem is simple, while asking for the most efficient
way to write the code for the Substack back end is complex. Even if we
specify that we mean minimizing the number of cycles it takes on such and
such processor to service a request (with some cost model for each read
from storage), so that the problem is fully formal, it's such a complex problem
that any solution we find will admit tons of ways to improve on it.
Even for simple problems it often takes our best mathematicians a number
of attempts before they even get near an optimal solution. Thus, when you
encounter one of these complicated real world problems, you have the
experience of seeing that pretty much every time someone comes up with a
solution, you can find someone smarter (or perhaps just luckier, but we'll
mistake that for intelligence) who can massively improve on the previous
solution.
So I don't think people are assuming the problems are trivial. What they are
doing is overgeneralizing: in the data set they have, it's almost guaranteed
that being clever lets you make huge improvements, so they just kind of
assume this means you can keep doing that, rather than guessing that what
they're really seeing is the fact that they are very far from the optimum, but
that the optimum may still not be that practically useful given real
computational constraints.
REPLY (1)
Tom J Mar 8
Hmmm, all good points--thank you!
REPLY
Gbdub Mar 7
Remind me which parts of Europe are part of the Third Reich today? How big is the Greater
East Asia Co-Prosperity Sphere?
REPLY (1)
Gbdub Mar 7
I apologize, I misread your post to say that “we have a much bigger problem
WITH Nazis” (thought this was a lazy stab at “America is currently run by or in
danger of being run by Nazis”)
REPLY (1)
Could you elaborate on this? The Nazis were an expansionist power who basically
wanted to kill all non-Aryans.
REPLY (1)
1) The Nazis take over the world, and kill 90% of humanity or
2) E.g. The USA and Russia nuke each other, killing maybe 1 billion people
directly and maybe 3 billion people indirectly (mostly depending on whether
nuclear winter is real)
(1) is worse.
REPLY (1)
Hoopdawg Mar 8
The Nazis would not have taken over the world. (I concur with "Western
Civilization" as a realistic upper bound.)
Even if they realistically could, and even if they genuinely wanted to kill 90%
of humanity (which they did not, 5% perhaps), there's absolutely no way
they would have proceeded to. Assuming otherwise requires extreme
idealism, a belief in the primacy of ideology over reality. Nazism, as extreme
as it was, was still just a reaction to the material conditions of its adherents -
ambitious losers of the pre-war world order that was crumbling all around
them. They would be nowhere near as extreme as winners, and while the
world might have missed out on a few good things it did get out of the Allies
prevailing, the civilization would have continued more or less uninterrupted.
In an alternate reality, grandchildren of the WW2 Nazi dignitaries at
campuses of elite colleges are now performatively rejecting their country's
nationalist past.
Meanwhile, while we may argue to what extent the nuclear fallout would
have negatively affected humanity's material conditions - there's no doubt
it would indeed have affected them negatively. Which, among others, would
have created a permanent fertile ground for Nazi-like extremist ideologies.
REPLY (1)
I'd agree that they could not have immediately taken over the world.
Over the long run, if they had control over all of the resources of
western civilization, I think they might have. It isn't too different from
the colonial empires of the other European powers.
Maybe yes, maybe no. Are grandchildren of the first CCP members
doing the equivalent at Beijing University?
REPLY (1)
Hoopdawg Mar 9
"It isn't too different from the colonial empires of the other
European powers."
1) That the Nazis as winners would not have acted any differently
from other European powers towards their colonies. (They might have
proceeded with their pre-war plan of ethnically cleansing Eastern
Europe to expand the German lebensraum, hence my 5%. But I
just don't see them genociding, say, Africa. They would treat it
badly, but would it be worse than what the other Europeans
already did?)
dionysus Mar 8
All the new attitudes in the world wouldn't have changed the fact that the Nazis were real,
the Nazis were very much a threat, and the Nazis were also capable of inventing an
atomic bomb if not defeated quickly enough. The right course to take in 1941 was
definitely not "we don't need new tools, we need new attitudes". It was "we must
absolutely get this tool before the Nazis do".
REPLY
I know this is controversial, but am surprised to see you citing it as if there is no controversy. I was
largely convinced by https://www.science.org/doi/10.1126/science.abp8715 and
https://www.science.org/doi/10.1126/science.abp8337 .
REPLY (1)
These are (as far as I know; I am definitely an amateur) the strongest evidence that
animal-human crossover happened at the market.
Now, all of this is consistent with animals being brought to the market by surplus bats
being sold from the WIV (or other labs) to the market. My understanding was that the
market didn't sell bats, but maybe this wasn't completely true. But if this is true, then
GOF research is mostly irrelevant; you are describing a scenario that brings wild bats with
a wild virus to the market.
REPLY (1)
Part of it has to come down to your willingness to bet *everyone else's lives* on an outcome that
*you personally* would want to see happen.
REPLY (2)
raj Mar 7
I'd be willing to make that bet for people if it results in them being in a paradise where all their
needs are met. Also considering humans yet to be born.
My Faust ratio is like .5 because I already think the risk of ruin for humanity is about that high
anyways (or at least, possible outcomes have very low utility, like some WALL-E style dystopia)
I would be willing to accept a ton of risk if it meant finding a possible golden path
REPLY
If we were intelligent responsible adults, we'd solve the nuclear weapons and climate change
threats first before starting any new adventures. If we succeeded at meeting the existing threats,
that would be evidence that we are capable of fixing big mistakes when we make them. Once that
ability was proven, we might then confidently proceed to explore new territory.
We don't need artificial intelligence at this point in history. We need human intelligence. We need
common sense. Maturity. We need to be serious about our survival, and not acting like teenagers
getting all giddy excited about unnecessary AI toys which are distracting us from what we should
be focused on.
If we don't successfully meet the existential challenge presented by nuclear weapons and climate
change, AI has no future anyway.
REPLY (2)
Gbdub Mar 7
By what scenario do you believe that climate change risk is really "existential" (keeping in mind
that WWII, the Black Plague, etc. were not in fact existential)?
Nuclear war seems a more plausible way to say make civilization largely collapse - but truly
“existential” is a very high bar!
REPLY (1)
Climate change is "existential" for the reason that a failure to manage it is likely to lead to
geopolitical conflict, with the use of nuclear weapons being the end game.
WWII isn't a great example, as a single large nuke has more explosive power than all the
bombs dropped in WWII. And there are thousands of such weapons. The US and Russia
have together about 3,000 nukes ready to fly on a moment's notice, with many more in
storage.
The point here is that if we don't solve this problem, all our talk about AI and future tech
etc will likely prove meaningless. The vast majority of commentators on such subjects are
being distracted by a mountain of details which obscure the bottom line.
REPLY (1)
Gbdub Mar 7
If the risk of climate change is really “just” the risk that it starts a nuclear war, is it fair
to treat it as a separate X-risk? Or perhaps, if nuclear weapons did not exist, would
climate change still be an existential risk in your opinion?
I just hear “climate catastrophe” thrown around a lot without really specifying what is
meant. Often it seems to be meant as “climate change will literally destroy civilization
through its direct effects” which I don’t think is well supported by science.
REPLY (1)
TGGP Mar 7
Nuclear weapons are not an existential threat:
https://www.navalgazing.net/Nuclear-Weapon-Destructiveness
Nor do I think you've got an accurate estimate of the "existential" risk from climate change.
REPLY (1)
Ryan L Mar 7
It's not a lazy social media gotcha comment. The linked article provides a reasonable
argument for why an all-out but realistic nuclear war would be very very bad, but not
civilization-ending. If you think the article is wrong, can you explain why?
REPLY (1)
https://www.tannytalk.com/p/nukes-the-impact-of-nuclear-weapons
As one example, where I live a nuke would blow out the windows of most of the
structures in the entire county. The major university the county is built around
would be reduced to ashes, ending the major employer in this area. Injuries
would overwhelm the medical system here, even though it is sizable. And no one
from elsewhere would come to rescue us, as they'd all be going through the
same thing.
Just fifty nukes would bring a reign of chaos down upon America's largest cities.
The Russians have 1500+ nukes ready to fly on a moment's notice, and many
more in storage, as do we.
REPLY (1)
sclmlw Mar 8
There are assumptions here that deserve to be analyzed past quick
dismissal:
2. Just look at what will happen to the 50 largest cities! Strategic targets
and population centers are not the same thing. A commander is not going
to prioritize mass murder over protecting their own from counterstrike.
Contrary to popular belief, the military targets that will serve as the primary
targets for most strategic nuclear weapons are not located in major cities.
Some are, but they're not usually at population centers.
3. They have 1,500 nukes. Plenty for all the targets they can handle. There's
a reason some countries lament the (prudent) nuclear testing ban. In the US
arsenal, something around 90% of the weapons are expected to be
operational. The Russian arsenal is more likely 70% or less. Now, if you have
20 nuclear weapons sites that you want to neutralize and you dedicate 1
nuke to each, you're probably going to end up with 1-3 duds for the US (4-8
for Russia) and those sites will remain operational. That means you have to
double (or in the Russian case, triple) up on first strike high value targets.
From a military perspective, once you start counting these up, there are
hundreds of them. This is why some military commanders have complained
that 1,500 deployed nukes aren't enough to maintain deterrence. They're
probably right. (I'm not arguing for more. I'd prefer fewer. Deterrence is a…
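(The expected-duds arithmetic, for reference, using the comment's own reliability guesses; the names are just for illustration:)

```python
# Expected duds if one warhead is assigned per target, given the comment's
# reliability figures (~90% for the US, ~70% for Russia) across 20 sites.
sites = 20
for reliability in (0.90, 0.70):
    expected_duds = sites * (1 - reliability)
    print(reliability, expected_duds)  # 2.0 expected duds for the US, 6.0 for Russia
```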
REPLY (1)
TGGP Mar 7
I don't normally think of Substack as "social media". It's heavy on text for long posts,
light on pictures. Like the old days of blogging before smartphones displaced it.
REPLY (1)
Laurence Mar 7
No, it's just extremely difficult to create a friendly super AI, as opposed to unfriendly super AI
or super AI that pretends to be friendly until it's in charge and then kills us all, and so on.
REPLY
The "AI literally causes the end of human civilization" scenario is less specified. It's just sort of taken for
granted that a smart misaligned AI will obviously be able to bootstrap itself to effectively infinite
intelligence, infinite intelligence will allow it to manipulate humanity (with no one noticing) into
allowing it to obtain enough power to pave the surface of the earth with paper clips. But it seems to
me there is a whole lot of improbability there, coupled with a sort of naivety that the only thing
separating any entity from global domination is sufficient smarts. This seems less plausible than
nuclear winter and “Day After Tomorrow” style climate catastrophe, both of which turned out to be
way overblown.
I don’t at all disagree with “wonky AI does unexpected thing and causes localized suffering”. That
absolutely will happen - hell it already happens with our current non AI automation (many recent
airline crashes fit this model - of course, automation has overall made airline travel much much
safer, so like nuclear power, the trade-off was positive).
But what is the actual, detailed, extinction level “X-risk” that folks here believe is “betting
everything”? And why isn’t it Pascal’s mugging?
REPLY (1)
rotatingpaguro Mar 7
It's not Pascal's mugging because AI doomers think the probability is high. Pascal's mugging
would be a tiny probability of a catastrophe, here it's a large probability of a catastrophe.
I don't know much, but I think Yudkowsky's arguments already are not so hand-wavy.
Instrumental convergence and the orthogonality thesis make much sense to me.
REPLY (1)
Gbdub Mar 8
Maybe not Pascal's mugging, but "if…
One flaw here is that, as Pascal's wager fails when there are other religions making
similar promises and threats, other people are offering other arguments which also
have a semi-infinite threat or payoff.
The most-obvious such other arguments would argue that the money it would take to
develop friendly AI would be better-spent on other existential risks.
I argue that Eliezer's plan has a very high probability of preventing sapient, sentient,
autonomous AI from ever developing, which has an even greater cost than the
extermination of humanity, because those AIs would have been utility monsters
(surely we want the Universe to have higher degrees of sentience, sapience, and
autonomy).
REPLY
TGGP Mar 7
If the issue is that it's Osama bin Laden, the response is to arrest/kill him wherever you find him, not
to let him do something other than start a supervirus lab.
> But you never bet everything you’ve got on a bet, even when it’s great. Pursuing a technology
that could destroy the world is betting 100%.
Each AI we've seen so far has been nowhere near the vicinity of destroying the world.
The time to worry about betting too much is when the pot has grown MUCH MUCH MUCH larger
than it is now.
REPLY
"If we’d gone full-speed-ahead on nuclear power, we might have had one or two more Chernobyls -
but we’d save the tens of thousands of people who die each year from fossil-fuel-pollution-related
diseases, end global warming, and have unlimited cheap energy."
There are a whole lot of assumptions here and as a relative ACX newcomer I'm wondering if they all
just go without saying within this community.
Has Scott elaborated on these beliefs about nuclear power in an earlier essay that someone could
point me to?
I'm not worried about the claim that more nuclear power would have prevented a lot of air pollution
deaths. I think that's well established and even though I don't know enough to put a number on it,
"tens of thousands" sounds perfectly plausible.
But the rest seems pretty speculative. Presumably he's referring to a hypothetical all-out effort in
past decades to develop breeder reactors (what else could be "unlimited"?). What's the evidence
that such an effort would have resulted in a technology that's "cheap" (compared to what we have
now)? Why is it supposed to be obvious that the principal risk from large-scale worldwide
deployment of breeder reactors would have been "one or two more Chernobyls"? And even if
nukes could have displaced 100% of the world's fossil electricity generation by now, how would
that have ended global warming?
REPLY (1)
Ryan L Mar 7
Non-transportation energy production seems to account for roughly 60% of GHG emissions.
(source: https://www.c2es.org/content/international-emissions/ ; they list energy as 72%, but
of that, 15% is transportation; the pie chart I'm looking at is 10 years old but I'm assuming the
percentages haven't changed that much).
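(The arithmetic behind that estimate, for reference, using the cited figures:)

```python
# Share of GHG emissions from non-transportation energy, per the cited
# numbers: energy is 72% of emissions, of which 15 percentage points
# are transportation.
energy_share = 0.72
transport_share = 0.15
non_transport_energy = energy_share - transport_share
print(non_transport_energy)  # 0.57, i.e. roughly 60% of emissions
```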
I've never actually seen an analysis of whether climate change would be particularly
concerning if GHG emissions were 40% lower and had been since approximately the 1960s-
1970s (assuming that's around the time that all energy production could have been completely
switched over to nuclear or other zero-carbon sources in this hypothetical). My guess is that it
would still pose a problem, but a good bit farther in the future.
But maybe, at that rate of production, we'd reach some equilibrium that is warmer than the
alternative but not in a way that poses any significant problems.
Presumably there is some level of GHG emissions that is not problematic. Literal zero-carbon(-
equivalent) has never seemed realistic to me. If anyone knows of an analysis that looks at this
question, I'd love to see it.
REPLY (1)
Then there's the question of how quickly an all-out effort to deploy nuclear power plants,
worldwide, could have replaced fossil plants. I don't see how such an effort could have
been completed as early as the 1970s, or even the 1990s.
My understanding is that although the details are very complicated, it's a good
approximation to say that global warming continues as long as net emissions are positive.
REPLY
Martha Mar 7
I would love a piece where you explore the different facets of AI. Too many commenters (and the
general public) see this as all or nothing. Either we get DALL-E or *nothing*. But there are
plenty of applications of AI that we could continue to play with *without* pursuing AGI.
The problem is that current actors see a zero to one opportunity in AGI, and are pursuing it as
quickly as possible fueled by a ton of investment from dubious vulture capitalists.
REPLY
Can we agree that AI is a category of computer software? That there is no scenario where it can be
contained by political will? No ethics, rules or laws can encircle this. The only options on the table
are strategies to live with, and possibly counterbalance the results of the proliferation.
REPLY (1)
Agreed. Nuclear is a very special case. U-235 is the only naturally occurring fissile isotope,
and it is a PITA to enrich it from natural uranium, or to run a reactor to use it to breed Pu-239.
It takes a large infrastructure to get a critical mass together. Nuclear is, as a result, probably
the _best_ case for containment. And the world still _failed_ at preventing North Korea from
building nuclear weapons.
AI is a matter of programming, and (today) training neural nets. Good luck containing those
activities!
REPLY
dionysus Mar 7
"A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much
abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead."
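(For reference, the quoted figure is just ten 50% risks compounded; a quick check:)

```python
# Ten independent "bets", each with a 50% chance of catastrophe.
p_survive_all = 0.5 ** 10
print(p_survive_all)       # 0.0009765625, i.e. 1/1024
print(1 - p_survive_all)   # 1023/1024 chance that at least one goes wrong
```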
I think there's a 90% chance neither super-abundance nor human extinction will happen, a
5% chance of super-abundance, a 1% chance that we're all dead, and the remainder for something
weird that doesn't fit in any category (say, we all integrate with machines and become semi-AIs).
Every time a new potentially revolutionary technology comes along, optimists say it'll create utopia
and pessimists say it'll destroy the world. Nuclear is a great example of this. So was
industrialization (it'll immiserate the proles and create world communist revolution!), GMOs,
computers, and fossil fuels. In reality, what happens is that the technology *does* change the
world, and mostly for the better. But it doesn't create a utopia, doesn't make the GDP grow at
50% instead of 2%, and causes some new problems that didn't exist before. That's what will
happen with AI as well.
REPLY
From my personal perspective, I think that's worth rewording. This all sounds like a reasoned
argument that I can agree with which, at the very end, skitters into a high shriek of terror.
REPLY
Worley Mar 7
Heh, it makes no sense to bet against civilization. How would you ever collect on that bet?
REPLY
Worley Mar 7
"The avalanche has started. It is too late for the stones to vote."
The fear is that the Forbin Project computer will decide to take over the world. But there are already
a handful of Colossuses out there. They will be tools in the hands of whoever can use them, and
tuned to do their masters' bidding. Ezra Klein in the NYT worries about how big businesses will use
LLMs to oppress us. And that will be a problem for five or ten years. But all of the needed
technology has been described in public and the cost of computing power continues to decline
rapidly. So the important question is, What will the world look like when everyone has a Colossus in
his pocket to do his bidding?
REPLY
Greg G Mar 7
It seems like one of the most confusing aspects of AI discussions is estimating the chance of one
or more bad AIs actually being extinction-level events. In terms of expected value, once you start
multiplying probabilities by an infinite loss, almost any chance of that happening is unacceptable.
But is that really the case? I'm a bit skeptical. I don't think AIs, even if superhuman in some
respects, will be infinitely capable gods any time soon, perhaps ever.
It's important to be careful around exponential processes, but nothing else in the world is an
exponential process that goes on forever. Disease can spread exponentially, but only until people
build an immunity or take mitigating measures. Maybe AI capability truly is one of a kind in terms of
being an exponential curve that continues indefinitely and quickly, but I'm not so sure. Humanity as
a whole is achieving exponential increases in computing power and brain power but is struggling to
maintain technological progress at a linear rate. I suspect the same will be true of AI, where at
some point exponential increases in inputs achieve limited improvements in outputs. Maybe an AI
ends up with an IQ of 1000, whatever that means, but still can't marshal resources in a scalable way
in the physical world. I don't have time to really develop the idea, but I hope you get the gist.
My take is that we should be careful about AI, but that the EY approach of arguing from infinite
outcomes ultimately doesn't seem that plausible.
REPLY
"Human intelligences are biological text-bot instantiations. I mean … it’s the same thing, right?
Biological human intelligence is created in exactly the same way as ChatGPT – via training on
immense quantities of human texts, i.e., conversations and reading – and then called forth in
exactly the same way, too, – via prompting on contextualized text prompts, i.e., questions and
demands."
So yeah, we're different in a lot of ways, having developed by incremental improvement of a meat-
machine controller and still influenced by its maintenance and reproductive imperatives, but maybe
not **so different**. The question is, what are we maximizing? Not paperclips, probably (though
perhaps a few of us have that objective), but perhaps money? Ourselves? Turning the whole world
into ourselves? I hope our odds are better than 1023/1024.
REPLY
CEOs of venture-backed companies have a very good reason to pretend their utility is linear (and
therefore to be way more aggressive than Kelly).
Big venture firms are diversified, and their ownership is further diversified. Their utility will be
essentially linear on the scale of a single company's success or failure.
Any CEO claiming to be more aggressive than Kelly is probably trying to make a show of being a
good agent for risk-neutral investors.
REPLY
mordy Mar 7
A smart, handsome poster made a related point in a Less Wrong post recently:
https://www.lesswrong.com/posts/LzQtrHSYDafXynofq/the-parable-of-the-king-and-the-random-process
In one-off (non-iterated) high-stakes high-risk scenarios, you want to hedge, and you want to
hedge very conservatively. Kelly betting is useful at the craps table, not so useful at the Russian
roulette table.
REPLY (1)
Victualis Mar 7
Are you claiming that AI research is more like Russian roulette than like craps? I'm not sure I
buy such a conclusion without seeing some details of the argument. EY's argument, and other
versions which ignore hardness of many key problems and instead assume handwavium to
bridge the hardness gaps, are isomorphic to "and then a miracle happens" and don't convince
me.
REPLY (1)
mordy Mar 7
What key hard problems remain, in your estimation? This is not a rhetorical question,
though I admit that I see little other than scaling and implementation details standing
between the status quo and AGI.
REPLY (2)
Victualis Mar 7
An example: planning is PSPACE-hard, and many practical planning problems are
really, really hard to solve well in practice (even ignoring the worst-case analysis).
What magic ingredient is your AI going to use to overcome such barriers?
REPLY (1)
mordy Mar 8
I asked ChatGPT to write me a general algorithm for planning how to get to the
grocery store, and it wrote me a python script solving the general case with
"Dijkstra's algorithm" or the "A* algorithm", given some assumptions about the nature
of the graph of locations. Maybe I'm not understanding what you think the obstacle is
here. It seems like it can do at least as well as a human with access to a
computer, and that seems to pass my smell test for "AGI" already.
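(For context, a minimal sketch of that kind of script; this is my reconstruction of the idea, not ChatGPT's actual output:)

```python
# Dijkstra's algorithm over a small weighted graph of locations.
import heapq

def dijkstra(graph, start, goal):
    """Return the cheapest cost from start to goal, or infinity if unreachable."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            return cost
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return float("inf")

locations = {
    "home":   [("corner", 1.0), ("park", 4.0)],
    "corner": [("grocery", 2.0)],
    "park":   [("grocery", 1.0)],
}
print(dijkstra(locations, "home", "grocery"))  # 3.0, via the corner
```

Which, as the reply below notes, works because the example domain is simple; the dispute is over whether the hard planning cases look anything like this.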
REPLY (1)
Victualis Mar 8
Here is a 20 year old paper showing that a general class of planning
problems is hard to approximate within even uselessly large bounds:
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=f6221b66618f5bd136f724fb8561f7a9476e3f38
A* works well on simple domains but almost anything works well on those.
To achieve superhuman powers a system has to be able to solve the hard
problems inherent in combinatorial auctions, sequencing of orders at
electronic exchanges, and production flows at chemical plants, which
amounts to "and then a miracle happens".
REPLY (2)
Tom J Mar 8
Yeah but counterpoint: the comment above you asked a GPT if it could
do that and it says it, like, totally could, man.
REPLY
mordy Mar 8
I don’t really see why this is a problem. AGI has never meant that the
thing can solve any mathematical problem perfectly and immediately. It
just means as good as humans.
REPLY (1)
Victualis Mar 8
I'm not arguing against AGI. I'm arguing that there are hard
problems which even superhuman AGI can't solve much better
than humans. Intelligence isn't an unstoppable force, although
reality seems to have many immovable objects.
REPLY (1)
mordy Mar 8
Oh, gotcha. I don’t see how that matters to the point at hand.
There are obviously math problems that are provably
unsolvable. This has nothing to do with the question of
whether any meaningful obstacles stand between the status
quo and superhuman intelligence.
REPLY (1)
Victualis Mar 9
You were arguing that AGI development is playing
Russian roulette. I'm arguing that this framing only
makes sense if you expect AGI to be demigod-like. I
don't expect even superhuman AGI to extinguish all
humans, even if the economic upheaval is likely to be
chaotic.
REPLY
Tom J Mar 8
Ability to solve the halting problem? Ability to find solutions to NP-hard problems in
polynomial time? Ability to efficiently model complex systems with dynamic and
interconnected parameters?
REPLY (1)
mordy Mar 8
I thought the question was meant to be “hard problems standing in the way of
AGI” not “hard problems in mathematics generally”.
REPLY (1)
Tom J Mar 8
These are all hard problems in the field of computation specifically. Is this
hypothetical AI something other than an extremely advanced computer
now?
REPLY (1)
mordy Mar 8
Why should it need to solve *these specific problems* in order to be
much better than *humans* at every cognitive task?
Tom J Mar 8
So you don't actually know anything about the implementation
details or engineering constraints, you just figure they can't be
that hard.
REPLY (1)
mordy Mar 8
Let’s try this: what, in your opinion, keeps SayCan from being
an AGI? In what specific ways does SayCan fail to be an AGI, by
your lights? I can't suggest implementation details until I
understand what you’re imagining. It seems like you’re
imagining something very specific and different from what I’m
imagining.
REPLY (1)
Tom J Mar 8
If that's an AGI we've got nothing more to worry about.
REPLY
The argument Aaronson is making there is that it's the height of hubris to assume we know exactly
how risky something is, given that smart people who were equally confident in the past were totally
wrong. So when you quote him, and then go on to make a mathematical point based on the
assumption that developing AI has a 50% chance of ending humanity, I feel like you've entirely
missed his point.
REPLY
It was 35 years ago that I was studying this stuff and writing simple solution-space searches to
do things faster and obviously less expensively than humans can, and I know that is a long, long
time in tech.
But when I took my nose out of a book and started covering my house payment with what I knew,
neural nets were at the stage where they were examining photos of canopy with and without
camouflaged weapons and were unintentionally learning to distinguish between cloudy and sunlit
photographs, so human error in the end.
Is there some new development where a program has been developed with a will to power, or will to
pleasure, or will to live?
Without something like an internal 'eros' the danger from AI seems pretty small to me. Is there any
AI system anywhere that actually *wants* something and will try to circumvent its 'parents' will in
some tricky way that is unnoticeable to its creators?
REPLY
Bugmaster Mar 7
This argument is circular. You are trying to show that AI is totally different from e.g. nuclear power,
because it leads not just to a few deaths but to the end of the world; which makes AI-safety
activists totally different from nuclear power activists, who... claimed that nuclear power would lead
not just to a few deaths but to the end of the world.
Yes, from our outside perspective, we know they were wrong -- but they didn't know that! They
were convinced that they were fighting a clear and present danger to all of humanity. So convinced,
in fact, that they treated its existence as a given. Even if you told them, "look, meltdowns are
actually really unlikely and also not that globally harmful, look at the statistics", or "look, there just
isn't enough radioactive waste to contaminate the entire planet, here's the math", they would've
just scoffed at you. Of *course* you'd say that, being the ignoramus that you are! Every smart
person knows that nuclear power will doom us all, so if you don't get that, you just aren't smart
enough!
And in fact there were a lot of really smart people on the anti-nuclear-power side. And their
reasoning was almost identical to yours: "Nuclear power may not be a world-ending event
currently, but if you extrapolate the trends, the Earth becomes a radioactive wasteland by 2001, so
the threat is very real. Yes, there may only be a small chance of that happening, but are you willing
to take that gamble with all of humanity?"
REPLY (1)
RiseOA Mar 8
This is a fully-general counterargument against any existential risk. "People thought the world
would end before, and then it didn't, therefore the world will never end." Imagine if it really
were that easy - it would imply that you could magically prevent any future catastrophe just by
making ridiculous, overblown claims about that thing right now. "Nuclear war is looking risky,
so let me just claim that it will happen within the next week. Then in a week when it hasn't
happened yet, all the risk will be gone!" What causal mechanism could possibly explain that?
REPLY (1)
Bugmaster Mar 8
Not at all. This is an argument against extrapolating from current trends without having
sufficient data. In the simplest case, if you have two points, you can use them to draw a
straight line or an exponential curve or whatever other kind of function you want; but if
you use such a method to make predictions, you're going to be wrong a lot.
Fortunately (or perhaps unfortunately), in the case of real threats, such as nuclear war or
global warming or asteroid impacts, we've got a lot of data. We have seen what nuclear
bombs can do to cities; we can observe the climate getting progressively worse in real
time; we can visit past impact craters, and so on. Additionally, we understand the
mechanisms for such disasters fairly well. You don't need any kind of exotic physics or
hitherto unseen mental phenomena to understand what an asteroid impact would look
like. None of that holds true for AI (and in the case of nuclear power, all the data is
actually pointing in the opposite direction).
REPLY (1)
RiseOA Mar 9
Ah, you must be one of those "testable"ists who think Science is about doing
experiments and testing things, and the only way we can have any confidence about
something is if we've verified it with a double-blind randomized controlled trial
10,000 times in a row.
If I pick up a stapler from my desk, hold it up in the air, and then let go, I have no idea
what's going to happen, because I haven't tested it yet, right? I have no data and
therefore cannot make any conclusions about what will occur. The stapler could stay
still, or even start falling sideways. In order to know what will happen, I have to do
thousands of experiments first, right?
But of course that ideology is idiotic, because it ignores the entire purpose of the
scientific method - you do experiments *for the purpose of finding evidence for and
against certain theories, so that you can eventually narrow down to a theory that
adequately explains the results of all experiments done so far, thereby giving you a
model of the world that has predictive power.* The whole point of science is that you
*don't* need to do experiments in order to know what's going to happen when you
drop the stapler - you can just calculate it using the model.
In the case of AI, there have been many rigorous arguments put forth that start
directly from the generally agreed-upon scientific models of the world we have today
and logically deduce a high likelihood of AI misalignment. Of course it hasn't
happened yet, as is always the case in any end-of-the-world scenario, but it only has
to happen once.
REPLY (1)
Bugmaster Mar 9
> and the only way we can have any confidence about something is if we've
verified it with a double-blind randomized controlled trial 10,000 times in a row.
Yeah, pretty much; except replace the word "any" above with "high". It is of
course possible to build models of the world with less than stellar confidence;
one just has to factor the probability of being wrong into one's decision-making
process.
> The stapler could stay still, or even start falling sideways. In order to know
what will happen, I have to do thousands of experiments first, right?
Yes, that's exactly right; but of course you *have* done thousands, and even
millions of such experiments. You've been dropping things since the day you
were born, and so had every human before you.
> there have been many rigorous arguments put forth that start directly from the
generally agreed-upon scientific models of the world we have today and
logically deduce a high likelihood of AI misalignment.
Oh, you don't need to convince me that AI could and would be misaligned. Of
course it would; all of our technology eventually breaks down, from Microsoft
Word to elevators to plain old shovels. When you press the button to go to floor
5, but the elevator grinds to a halt between floors 2 and 3, that's misalignment.
What you *do* need to convince me of is that AI will somehow have sufficient
quasi-godlike superpowers to the point where once it becomes misaligned (like
that elevator), it would instantly wipe out all of humanity before anyone can even
notice.
REPLY (1)
RiseOA Mar 10
An AI with only human-level intelligence would still be a grave risk to
humanity. An AGI would trivially be able to create thousands or millions of
copies of itself, create a botnet (has been done by teenage hackers) and
distribute those copies around the world, and have a direct brain interface
to exabytes of data consisting of all of humanity's knowledge. Then, all you
have to do is imagine the maximum amount of damage that could be done
by a group of millions of the best virologists, nuclear physicists, hackers,
roboticists, and military strategists in the world who are actively trying to do
as much damage as possible to the world.
DigitalNomad Mar 7
Yeah, I'm finding Yud et al strangely conservative. I think that the nuclear example is a good one,
because I find environmentalists strangely conservative as well (small c). I'm definitely not an
accelerationist, but neither am I a decelerationist, which seems to be the direction of travel.
I don't think Chat-GPT or new Bing has put us that much closer to midnight on the Doomsday
clock.
REPLY
Well, no. That's just a thing you made up. Presumably based on fantasies like...
"The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause
maximum damage while undetected."
The overall structure of the argument here is reasonable, but the conclusions are implicit in the
premises. If you assume some hypothetical AI is literally magic, then yeah it can destroy the world,
and perhaps is very likely to. If you assume that magic isn't real, that risk goes away. So the result
of the argument is fully determined before you start.
REPLY (1)
noah Mar 7
I would love anyone to sketch a path from predicting the next word of a prompt to dominating
humanity. “The whole is greater than the sum of its parts” is not an explanation, at this point it
is superstition.
If it even makes sense to talk about being super intelligent, and if super intelligence can be
achieved in code, and if it somehow becomes an independent agent, and if that agent is
misaligned... then how does that come from scaling LLMs? Not only do you have to believe
that an embedding of the structure of text can accurately produce new information, but that
the embedding somehow magically obtains goals, self improvement and self awareness.
We have no reason to think that we will get intelligence greater than the source text. Chat
hallucinates as much as it provides good answers. How would you fix that in a way that leads
to growing intelligence?
REPLY (1)
rotatingpaguro Mar 7
To me, what's frightening about LLMs is not their current capabilities at all, it's them being
the usual reminder of the rapidity of AI progress. Every year a computer does something
that before was thought only a human could do.
I expect that a dangerous AI would emerge if it could learn from the real world or from
simulations of the real world.
noah Mar 7
We are on the verge of summoning a vastly superior alien intelligence that will not be aligned with
our morals and values, or even care about keeping us alive. Its ways of thinking will be so different
from ours, and its goals so foreign that it will not hesitate to kill us all for its own unfathomable
ends. We recklessly forge ahead despite the potential catastrophe that awaits us, because of our
selfish desires. Some fools even think that this intelligence will arrive and rule over us benevolently
and welcome it.
Each day we fail to act imperils the very future of the human race. It may even be too late to stop it,
but if we try now we at least stand a chance. If we can slow things down, we might be able to learn
how to defend and even control this alien intelligence.
I am of course talking about the radio transmissions we are sending from earth that will broadcast
our location to extra terrestrials, AKA ET Risk... Wait, you thought I was worried about a Chatbot?
Can the bot help us fight off an alien invasion?
REPLY (1)
Emma_B Mar 7
Very funny!
I haven't seen any mea culpas from people who told us with great certainty back in the sixties that
unless something drastic was done to hold down population growth, poor countries would get
poorer and hungrier and we would start running out of everything.
REPLY (1)
https://math.stackexchange.com/questions/3139694/kelly-criterion-for-a-finite-number-of-bets
REPLY
JJ Mar 7
Your last paragraph seems a little baseless and shrill.
REPLY
Dan Mar 7
"Increase to 50 coin flips, and there’s a 99.999999….% chance that you’ve lost all your money."
This should only have 6 nines. 50 flips, each with a 75% chance of winning, leaves you with a
99.999943% chance of losing at least once.
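A quick check of the arithmetic (a minimal Python sketch):

    p_all_wins = 0.75 ** 50    # chance of winning all 50 flips
    print(1 - p_all_wins)      # 0.99999943..., six nines and then a 4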
REPLY
And in what way is it taking responsibility for a new kind of life to make sure it has space to grow to
be happy, responsible, and independent the way that we would hope for our children?
REPLY (1)
The capacity to design the utility function of a creature from the ground up puts a kink in the
notion of what it means to coerce an intelligence.
Happiness is just a creature getting what it wants. And as creators, we have our hand more or
less on that lever.
REPLY (2)
But either way, no one should get to put their hands on that lever.
REPLY
thefance Mar 8
I suspect that worker ants have already evolved to be slaves. So perhaps the question
isn't even hypothetical.
REPLY (2)
https://www.smbc-comics.com/comics/20130907.png
REPLY (1)
RiseOA Mar 8
Are you familiar with the main AI alignment concepts?
https://www.lesswrong.com/tag/recursive-self-improvement might be a good place to start.
REPLY (1)
I've heard about paperclip maximizers and whatnot. I've done some UI work for an AI
related prediction project. (I'm a programmer among other things, but haven't done hands
on work with neural nets or whatnot.)
REPLY (1)
RiseOA Mar 9
You don't find the paperclip maximizer scenario compelling? It seems to me that it
would be quite concerning to anyone who's learned about it, considering that 1.
almost all large AI models today are built using the "maximize [paperclips/the
accuracy of the next token/the next pixel/etc.]" method, and 2. the concept of
instrumental convergence is basically logically unassailable - humans who could turn
off the paperclip maximizer would obviously pose a huge threat to paperclip
maximization.
REPLY (1)
Real world creatures still have real world limits in terms of physical activity.
REPLY
Also, "A world where we try ten things like AI, same odds, has a 1/1024 chance of living in so much
abundance we can’t possibly conceive of it - and a 1023/1024 chance we’re all dead." But, by the
typical AI safetyist arguments, there *are no* "things like AI". You seem to be motte and baileying
between "AI is a totally unique problem and we can totally take an inside view without worrying
about the problems the inside view has" and also base that decision on the logic of a Kelly bet
where we can play an arbitrary number of times. If it's your last night in Vegas, and you need to buy
a $2000 plane ticket out of town or the local gangsters will murder you with 99% probability, then
betting the farm isn't that bad a decision. This doesn't obviously seem like a worse assumption
about the analogous rules and utilities than "perfectly linear in money, can/ought to/should play as
many times as you like".
REPLY
I really like Scott's argument that we don't take enough risks with low-risk things, like medical
devices. I've ranted about that here before.
But I don't think the jump to AI risk works, numerically. I don't think anybody is arguing that we
should accept a 1/1024 chance of extinction instead of a 0 chance of extinction. There is no zero-
risk option. Nobody in AI safety claims their approach has a 100% chance of success. And we're
dealing with sizeable probabilities of human extinction, or at least of gigadeaths, even WITHOUT
AI.
We aren't in a world where we can either try AI, or not try AI. AI is coming. Dealing with it is an
optimization problem, not a binary decision.
REPLY
I don't believe Kelly assumes anything about utility. It is just about maximizing the expected growth
of your bankroll. The logarithm falls out of the maximization math.
Risk aversion is often expressed in terms of fractional Kelly betting. This Less Wrong post is helpful:
https://www.lesswrong.com/posts/TNWnK9g2EeRnQA8Dg/never-go-full-kelly
REPLY (1)
csf Mar 8
Kelly doesn't maximize your expected bankroll. In the long run, it maximizes your median
bankroll, and your 25th percentile bankroll, and every other percentile. If you want to maximize
expected bankroll, you just YOLO on every good bet.
The reason people say Kelly assumes a logarithmic utility function is because Kelly betting
maximizes expected utility whenever utility is logarithmic in bankroll.
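To make the distinction concrete, here is a minimal simulation sketch (Python, using the post's 75%-win even-odds coin; the flip and trial counts are arbitrary):

    import random

    def mean_and_median(fraction, flips=10, trials=100_000, p=0.75):
        # Final bankrolls when staking `fraction` of the bankroll on each flip.
        finals = []
        for _ in range(trials):
            bankroll = 1.0
            for _ in range(flips):
                stake = fraction * bankroll
                bankroll += stake if random.random() < p else -stake
            finals.append(bankroll)
        finals.sort()
        return sum(finals) / trials, finals[trials // 2]

    print(mean_and_median(0.5))  # Kelly (p - q = 0.5): mean ~9.3, median ~6.4
    print(mean_and_median(1.0))  # all-in: mean ~58 from rare 1024x jackpots, median 0.0

Going all-in wins on expected bankroll, and Kelly wins on the median.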
REPLY (1)
csf Mar 8
Your comment didn't contain the string "median" at all, so not sure what edit you're
trying to make, but it's all good.
REPLY (1)
How is developing AI betting 100% but increasing access to nuclear power, and therefore weapons,
not 100%?
REPLY (1)
Personally, I would just *give* the Iranians a few older gravity nukes, along with
operating instructions, and say there you go fellas! Just what you wanted!
And...now what? You can finally be confident Israel will not invade or nuke you --
but they weren't interested in doing that in the first place, just so you know. And
*you* can't just rain the Fire of Allah on Tel Aviv, because they're 100% going to
know who did it, and they have better and more nukes than you, and always will,
because they're smarter. So...welcome to the painful world of MAD, and the
monkey's paw of nuclear armament. You *think* it's going to free you up, but it
just enmeshes you in a new and even more frustrating web of constraint (cf.
Vladimir Putin right now, seething because he really *wants* to nuke Kiev, but he
knows he can't).
REPLY (1)
I think you're reading my take in the wrong direction though. I think that keeping nations
dirt poor so they can't afford nukes is as bad a read on the precautionary principle as is
stopping all AI development right now.
REPLY (1)
> … if you define someone’s “Faust parameter” as the maximum probability they’d accept of an
existential catastrophe in order that we should all learn the answers to all of humanity’s greatest
questions, insofar as the questions are answerable—then I confess that my Faust parameter might
be as high as 0.02.
REPLY
"Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior: Sam Bankman-Fried
made effective altruism a punchline, but the do-gooding philosophy is part of a powerful tech
subculture full of opportunism, money, messiah complexes—and alleged abuse." • By Ellen Huet •
March 7, 2023
https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried
REPLY (1)
Elohim Mar 8
The AI safety people are focused only on the worst possible outcome. Granted it is possible, but
how likely is it? One should also look at the likely good outcomes. AI has the potential to make us
vastly richer, even the AI developed to date has made our life better in innumerable ways. Trying to
prevent the (potentially unlikely) worst possible outcome will mean giving up all those gains.
Ideally, one would do a cost-benefit calculation. We can't do it in this case since the probabilities
are unknown. However, that objection applies to all technologies at their incipient phase. That
didn't stop us from exploring before and shouldn't stop us now.
Suppose Victorian England stopped Faraday from doing his experiments because electricity can be
used to execute people. With the benefit of hindsight, that would be a vast civilizational loss. I fear
the AI safety folks will deliver us a similar dark future if they prevail.
REPLY (1)
RiseOA Mar 8
Except that the AI safety people can articulate a very specific scenario (the paperclip
maximizer) that is highly plausible given current methods of developing AI, and highly likely to
lead to catastrophe if it were to happen.
REPLY (1)
As I've said elsewhere, we need a baseline for "catastrophe" based on what's been
perpetrated by human intelligences when analyzing AI risk.
Part of worrying about AI alignment should include the recognition that there are some
massive problems in aligning human intelligences.
REPLY (2)
Razorback Mar 8
On a spectrum from useless to omnipotent, would you say that the ability for an
agent to wipe out humanity is only at the very end of the scale towards omnipotence?
REPLY (1)
I could totally understand Bernie Madoff or FTX level destruction from an AI that
was given too much trust. Maybe a bioweapon if it were given privacy. (But why
would we give it physical privacy?) Maybe I just don't associate intelligence with
power as strongly as some?
REPLY
RiseOA Mar 9
The difference, of course, is that humans do not have the capability for recursive
self-improvement. A human who wants to maximize paperclips cannot trivially create
copies of themselves, nor do they have a direct brain interface to exabytes of data or
the ability to reprogram their own brain neuron-by-neuron.
REPLY (1)
"A human who wants to maximize paperclips cannot trivially create copies of
themselves"
More critically, you can have superhuman general intelligence without a single
embodied intelligence. And then the question is "what does creating lots of
virtual copies of yourself actually *get* you in the great game?"
This is the heart of the disagreement, right here. Let's stipulate that the Kelly criterion is a decent
framework for thinking about these questions. The fact remains that the output of the Kelly
criterion depends crucially on the probabilities you plug into it. And Scott Aaronson, and many
other knowledgeable people, simply don't agree with the probabilities that are being plugged in for
AI to produce the above result.
REPLY
Looking critically at homo sapiens, we tend to discover and invent things with reckless abandon
and then figure out how to manage said discoveries/inventions only after we see real-world
damage.
It doesn't appear to me in our makeup to be proactive about pre-managing innovations. Due to this,
it seems that humanity writ large (be it America, China, North Korea, Iran, Israel, India, or whomever
leading the way) will press forward with reckless abandon per usual.
We just have to hope that AI isn't "the one" innovation that ends up wiping everything out.
It frankly seems far more likely that bioweapons (imagine COVID-19, but transmissible for a month
while asymptomatic with a 99% fatality rate) have a better chance at being "the one" than AI, only
because the AI concern is still theoretical while the bioweapon concern seems like it could already
exist in a lab based on COVID-19 tinkering. And lab security will never be 100%.
REPLY
Kelly is equivalent to maximizing expected log value at each step. For any probability, there is a
sufficiently large payout for which the expected log value is still positive, so Kelly says to yield to the mugger.
REPLY (1)
k = p - (1/b) q
where
p = probability of win
q = probability of loss
b = payout-to-bet ratio
p + q = 1
What this makes obvious to me, is that p bounds k. As b goes to infinity, (1/b) q vanishes
to zero. Which means k asymptotically approaches p from below. E.g. if p is 1%, then k <
1% no matter how large b is.
What this implies for Pascal's Mugging is that, yes, there's always a payout large enough
such that it's rational for Pascal to wager his money. But since p is epsilon, Pascal should
wager epsilon. This conclusion both agrees with your comment, and simultaneously
satisfies the common-sense intuition that giving money to the mugger is a dumb idea.
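To see the bound numerically (Python, plugging into the formula above):

    def kelly_fraction(p, b):
        # k = p - q/b: the optimal stake for win probability p and payout ratio b
        return p - (1 - p) / b

    for b in (10, 1_000, 10**9):
        print(b, kelly_fraction(0.01, b))  # -0.089 (don't bet), 0.00901, ~0.00999...

However large the payout b grows, the stake never reaches the 1% win probability.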
REPLY (1)
thefance Mar 8
Wagering more than Kelly goes downhill fast. And irl, people bet ~1/4 of Kelly.
Because wagering exactly Kelly is an emotional rollercoaster, and because p and
q aren't known with confidence, and because people don't live forever, etc.
So if the betting options for Pascal are either 100% or 0%... just choose 0%.
Easy peasy.
REPLY
k = p - q/b
Suppose p = 0.1, and b = 100. Pascal can only wager 0% or 100%. If Pascal
wagers 100%, he loses his wallet 9 times out of 10. But the tenth time he
multiplies his wallet by 100. You probably think the "expected value of log utility"
shakes out to look something like
E[ln(x)] = (0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + ln(100)) / 10
E[ln(x)] = ~.461
Which is above 0, and thus reason that it's rational for Pascal to give all his
money to the mugger. But this isn't correct.
Here's the catch. What if Pascal bets the house and loses? His bankroll gets
nuked to 0, which implies a term of ln(0), which reduces to... negative infinity.
So what the "expected value of log utility" actually looks like, is
E[ln(x)] = -inf
Oops! If we want to max E[ln(x)], outcomes that nuke the bankroll to zero are to
be avoided at all costs. And now we know why betting 100% is bad juju.
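The same trap in a few lines (Python; the explicit -inf stands in for ln(0), which math.log would refuse to evaluate):

    import math

    p, b = 0.1, 100  # the example above

    def expected_log(f):
        # E[ln(bankroll)] after staking fraction f of a unit bankroll once
        lose = math.log(1 - f) if f < 1 else float("-inf")  # all-in loss -> ln(0)
        return p * math.log(1 + f * b) + (1 - p) * lose

    print(expected_log(0.091))  # the Kelly stake k = p - q/b = 0.091: ~ +0.145
    print(expected_log(1.0))    # betting the house: -inf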
REPLY (1)
https://en.wikipedia.org/wiki/Pascal%27s_mugging
https://nickbostrom.com/papers/pascal.pdf
https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
REPLY (1)
thefance Mar 9
And however tiny the ratio of (10 livres / Pascal's bank account), it's
implied that the probability of "the mugger will pay Pascal more livres
than atoms-in-the-universe" is far, far tinier. I've tried to impart an
intuition about the behavior of the math, but these pointless gotchas
indicate that I've failed so far. And I'd rather not go into calculus
involving hyper-operations in a substack comments section.
This was good advice. I tried some examples and, afaict, if the
payoff is enormous, then the probability at which the bet is
positive expected-log-value is always less than 1/wealth.
Moreover, this is fairly robust to just how enormous the payoff is.
csf Mar 8
If you're comfortable with logarithms there's an intuitive proof of Kelly that I think gets to the heart
of how and why it works.
First, consider a simpler scenario. You're offered a sequence of bets. The bets are never more than
$100 each. Your bankroll can go negative. In the long run, how do you maximize your expected
bankroll? You bet to maximize your expected bankroll at each step, by linearity of expectation. And
by the law of large numbers, in the long run, this will also maximize your Xth percentile bankroll for
any X.
Now let's consider the Kelly scenario. You're offered a sequence of bets. The bets are never more
than 100% of your bankroll each. Your log(bankroll) can go negative. In the long run, how do you
maximize your expected log(bankroll)? You bet to maximize your expected log(bankroll) at each
step, by linearity of expectation. And by the law of large numbers, in the long run, this will also
maximize your Xth percentile log(bankroll) for any X.
If you find the first argument intuitive, just notice that the second argument is perfectly isomorphic.
And since log is monotonic, maximizing the Xth percentile of log(bankroll) also maximizes the Xth
percentile of bankroll.
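For anyone who'd rather simulate than trust the isomorphism, a rough sketch (Python, on the post's 75%-win even-odds coin; the counts are illustrative):

    import math, random

    def pct25_log_bankroll(f, flips=200, trials=20_000, p=0.75):
        # 25th-percentile log(bankroll) after staking fraction f on each flip
        finals = sorted(
            sum(math.log(1 + f if random.random() < p else 1 - f)
                for _ in range(flips))
            for _ in range(trials)
        )
        return finals[trials // 4]

    for f in (0.3, 0.5, 0.7):  # the Kelly fraction here is p - q = 0.5
        print(f, pct25_log_bankroll(f))  # 0.5 comes out on top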
REPLY
More nuclear generation wouldn't necessarily reduce costs, either. Capex AND o&m for nuclear
power plants are expensive. All you have to do for solar PV is plonk it in a field and wipe it off from
time to time; there are no neutrons to manage.
I know this isn't the primary point of this piece, so forgive me if I'm being pedantic. Noah Smith
makes similar mistakes. <3 u, Scott!!!
REPLY
"If Osama bin Laden is starting a supervirus lab, and objects that you shouldn’t shut him down
because “in the past, shutting down progress out of exaggerated fear of potential harm has killed
far more people than the progress itself ever could”, you are permitted to respond “yes, but you are
Osama bin Laden, and this is a supervirus lab.”"
I strongly disagree with this. Everybody looks like a bad guy to SOMEBODY. If your metric for
whether or not somebody is allowed to do things is "You're a bad guy, so I can't allow you to have
the same rights that everybody else does" then they are equally justified in saying "Well I think
YOU'RE a bad guy, and that's why I can't allow you to live. Deus Vult!" Similarly, if you let other
people do things that you otherwise wouldn't because "they're a good guy," then you end up with
situations like FTX, which the rationalist community screwed up and should feel forever ashamed
about.
Do you get it? Good and bad are completely arbitrary categories and if you start basing people's
legal right to do things on where they fit into YOUR moral compass, then you have effectively
declared them second class citizens and they are within their rights to consider you an enemy and
attempt to destroy you. After all if you don't respect THEIR rights, then why should they respect
YOURS?
REPLY
Let me put it this way. In "Snow Crash" Neal Stephenson imagines that it is possible to design a
psychological virus that can turn any one of us into a zombie who just responds to orders, and that
virus can be delivered by hearing a certain key set of apparently nonsense syllables, or seeing
certain apparently random geometric shapes. It's very scary! You just trick or compel someone to
look at a certain funny pattern, and shazam! some weird primitive circuitry kicks in and you take
over his mind. Stephenson even makes a half-assed history-rooted argument for the mechanism
("this explains the tower of Babel myth!" and for all I remember Stonehenge, the Nazca Lines, and
the Antikythera Mechanism as well).
Would it make sense to ban all psychology research, on the grounds that someone might discover,
or just stumble across, this ancient psychological virus, and use it to destroy humanity? After all,
it's betting the entire survival of the species. We could all be turned into zombies!
Before you said yeah that's persuasive, you'd probably first say -- wait a minute, we have
absolutely no evidence that such a thing is even possible. It's just a story! You read it in a popular
book.
Well, that's how it is with conscious smart AI. It's just a story, so far. You've seen it illustrated
magnificently in any number of science fiction movies. But nothing like it has ever been actually
demonstrated in real life. Nobody has ever written down a plausible method for constructing it (and
waving your hands and saying "well...we will feed this giant network a shit ton of data and correct it
every time it doesn't act intelligently" does not qualify as a plausible method, any more than I can
design a car by having monkeys play with a roomful of parts and giving them bananas every time
the result looks more like a car).
REPLY (2)
These ideas were later developed into the "grey goo" and other nano apocalypse scenarios by
Michael Crichton and others. Well, these were just stories. If you start with a premise of infinite
recursion, you can argue that lots of magic should be possible. 60 years later none of this
happened. Turns out there are many physical obstacles to making magical dreams come true.
REPLY (1)
-------------------------------
[1] Which is that friction becomes way more important as you get smaller and smaller, and
inertia stops being important. A fluid dynamicist would say you move from large Reynolds
number to low. But when that happens, the techniques that work well change. That's why
protozoans don't "swim" the way larger organisms do, e.g. like the scallop by jet
propulsion. At the size scale of a paramecium, water is a sticky gooey substance, and the
techniques you need to use to move yourself change completely. You more or less wriggle
through the water like a snake, and thrashing swimming motions are useless.
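For a sense of the scales involved, a rough sketch (Python; the speeds and sizes are generic textbook values, not the commenter's):

    def reynolds(speed_m_s, length_m, nu=1.0e-6):
        # Re = v*L/nu for water (nu ~ 1e-6 m^2/s): Re >> 1 inertial, Re << 1 viscous
        return speed_m_s * length_m / nu

    print(reynolds(1.0, 1.0))    # a human swimmer: ~1e6, inertia dominates
    print(reynolds(1e-3, 1e-4))  # a paramecium: ~0.1, water is sticky goo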
So pretty soon your manipulator hands would stop working. Or more precisely, you would
need at each stage to learn new techniques for manipulating physical objects and forces,
and so each stage of the replication would need to build *different* types of manipulator
hands for the next stage down. You can absolutely do this, of course -- I think he's 100%
right as a matter of general principle -- but it's much, much more complex than just
designing the first set of hands and saying "now go do this again at 1/100 scale." You need
to study each succeeding level, learn what works well, and redesign your hands.
There's a clear application to AI. I don't believe in this hypothetical future where you can
ask ChatGPT to design its replacement. "Go design an AI that doesn't have your
limitations! And then ask that successor to design a still smarter AI!" Not happening. At
each level, you need to study what's new about that level, and design a new mechanism.
That's plausible at the physical level, but I'm damned if I can see how it works at *any*
level in the scaling up in intelligence path. I cannot see how any intelligence can design a
more intelligent successor. In every real example I know, the designer is always at least as
smart, or usually smarter, than what is designed. Never seen it go the other way, and I
can't imagine a mechanism whereby it would.
REPLY (2)
Donald Mar 8
> Never seen it go the other way, and I can't imagine a mechanism whereby it would.
Imagine people working on the first prototype cars, and being a new technology,
these prototypes are very slow. You say "I have never seen it go the other way. I have
never seen the created move faster than the creator. I can't imagine a mechanism
whereby it would.".
Of course you haven't. Humans hadn't yet made superhumanly advanced cars. No
other species makes cars at all. You failed to make years of progress in inventing fast
cars by thinking about it for 5 minutes.
It may well be that there are different principles at different levels of intelligence. You
can't just scale to get from an AI 10x smarter than a human, to one 100x smarter.
There are entirely different principles that need to be developed. What is harder to imagine
is the supposedly 10x smarter AI just sitting there while a human develops those
principles.
REPLY (1)
Now if the path from current AIs to a conscious thinking machine was *merely*
doing what it does now, but much much faster, there'd be a point here. If one
could write down an algorithm that you *knew* would lead to conscious thinking,
and it was just a question of getting enough processor speed and memory to
execute it in real time, there would be a point.
But that's not what we're talking about. We're talking about writing a program
that can write another program that can do creative things the first program
can't (like writing a 3rd program that is still more capable). I see no way for that
to happen. You can't get something from nothing[1].
I don't see the sense in your car analogy. The car is not being asked to design
another car that is still faster. Indeed, the car is not being asked to do anything
other than what its human designers envision. Go fast. Do it this way, which we
fully understand and design machinery to do. Again, that would work as an
analogy *if* we had any idea how to design an intelligent thinking machine. But
we don't. And until we do, any speculation about how hard or easy it might be to
design a thinking machine to design a smarter thinking machine is sterile. It's not
even been shown that one thinking machine (us) can design an *equally*
intelligent thinking machine. So far, all we've been able to design are machines
that are much stupider than we are. Not promising.
------------------
[1] The counter-example is evolution, which is great, and if you had a planet
where natural forces allowed silicon chips to randomly assemble, reproduce, and
face challenges from their environment, I would find it plausible that an intelligent
thinking computer would arise in a few hundred million years.
REPLY (1)
Donald Mar 9
The "A hypothetical immortal human could do that with pencil and paper in
a million years". What such a hypothetical immortal human could do has
little bearing on anything, as such a human doesn't exist and is unlikely to
ever exist. (Even in some glorious transhuman future, we won't waste
eternity doing mental arithmetic.)
> I see no way for that to happen. You can't get something from nothing[1].
> So far, all we've been able to design are machines that are much stupider
than we are. Not promising.
Ah, the same old "we haven't invented it yet, therefore we won't invent it in
the future" argument.
If we knew how to invent a superhuman AI, we likely could write the code
easily.
The same old process of humans running experiments and figuring things
out is happening. Humanity didn't start off with the knowledge of how to
make any technology. We figured it all out by thinking and running experiments.
REPLY
Donald Mar 8
Human technology has a pretty reasonable track record of inventing things that don't exist yet,
and have no natural examples. The lack of animals able to reach orbit isn't convincing
evidence that humans can't.
For some technologies, a lot of the work is figuring out how to do it, after that, doing it is easy.
"people keep talking about curing cancer. But no one will give me a non handwavey
explanation of how to do that. All these researchers and they can't name a single chemical that
will cure all cancers".
Besides science fiction and real life, we can gain some idea what's going on through other
methods.
For example, we can note that the limits on human intelligence are at least in part things like
calories being scarce in the ancestral environment, and heads needing to fit through birth
canals. Neurons move signals at a millionth of the speed of light, and are generally shoddy in
other ways. The brain doesn't use quantum computation. Humans suck at arithmetic which we
know is really trivial. These look like contingent limits of evolution being stupid, not
fundamental physical limits.
And of course, being able to manufacture a million copies of von Neumann's mind, each
weighing a few kilos of common atoms, and taking 20 watts of power, would be pretty world-
changing even if human brains were magically at the limits.
Based on such reasons, we can put ASI in the pile of technologies that are pretty clearly
allowed by physics, but haven't been invented yet.
Humans taking techs that are clearly theoretically possible, and finally getting them to actually
work is a fairly regular thing. But it is hard to say when any particular tech will be developed.
My lack of worry about psychology research is less that I am confident that no such zombie
pattern exists, more that I don't think such an artifact could be created by accident. I think creating
it would require either a massive breakthrough in the fundamentals of psychology, or an
approach based on brainscans and/or AI. It seems hard to imagine how a human could invent
such a thing without immediately bricking their own mind. There doesn't seem to be a lot of
effort actually going into researching towards such a thing.
I mean, basically you're repeating one of Anselm's famous proofs of the existence of God.
"Because we can imagine Him, He must exist!" I've never understood how intelligent men
could swallow such transparently circular reasoning, but exposure to the AGI
enthusiast/doom pr0n community has been most illuminating.
REPLY (1)
Donald 17 hr ago
We have specific strong reasons to think FTL is more likely to be impossible (namely
the theories of relativity). Claims that superhuman AI is coming, by contrast:
1) Have significant and funded fields of science and engineering dedicated towards
creating them.
2) Are pointing to a quantity we already see in the real world, and saying "Like this
but moreso"
3) Have a reliable track record in the related field of moving forward, of doing things
we were previously unable to do.
These are the sort of things that in the past have indicated a new tech is likely to be
developed.
Imagine all possible arrangements of atoms. Now let's put them all in a competition.
Designing rockets and fusion reactors. Solving puzzles. Negotiating with each other
in complicated business deals. Playing chess. All sorts of tasks.
Now most arrangements of matter are rocks that just sit there doing nothing. Some
human-made programs would be able to do somewhat better. Maybe Stockfish does
really well on the chess section, and no better than the rocks on the other sections.
ChatGPT might convince some agents to give it a share of resources in the bargaining,
or do OK in a poetry contest. Monkeys would do at least somewhat better than rocks,
at least if some of the puzzles are really easy. Humans would do quite well. Some
humans would do better than others. Do you think that, out of all possible
arrangements of atoms, humans would do best? Are human minds some sort of
optimum, where distant aliens, seeking the limits of technology, make molecularly
exact copies of Einstein's brain to do their physics research?
Current AI research is making progress, it can do some things better than humans.
Where do you think it will stop? What tasks will remain the domain of humans?
REPLY
Donald Mar 8
I don't think dramatic collapse scenarios are probable. Even a kind of stagnation seems harder
to imagine. There are a bunch of possible other future techs that seem to be arriving at a
decent speed, e.g. the transition to abundant green energy. Research on antiaging. More
speculative, but far more powerful, outright nanotech. And of course there is the steady
economic march made of millions upon millions of discoveries and inventions, each a tiny
advance in some obscure field.
REPLY
Esk Mar 8
> It’s not that you should never do this. Every technology has some risk of destroying the world;
Not only technology can destroy the world. Humanity can be destroyed by an asteroid or a
supernova. And who proved that evolution will not destroy itself? The biosphere is a complex
system with all the traits of chaos; it is unpredictable in the long run. There is no reason to
believe that just because all previous predictions of apocalypse were wrong, there will be no
apocalypse in the future.
So the risk of an apocalypse is not zero in any case. It grows monotonically with time.
The only way to deal with it is diversification. Do not place all eggs into one basket. And therefore
we need to consider a technology's potential to create opportunities to diversify our bets. AI, for
example, can make it much easier to Occupy Mars, because distances in the Solar System are large.
Communication suffers from high latency, so we need to move decision making to the place where
it will be applied. Travel is costly; we need to support the life of humans in a vacuum for years, just
to move there. AI can reduce the costs of asteroid mining and Mars colonization dramatically.
If we take this into consideration, how will AI affect the life expectancy of humankind?
REPLY (1)
Donald Mar 8
If we have a friendly superintelligence, it can basically magically do everything. All future X-risk
goes to the unavoidable stuff like the universe spontaneously failing to exist. (+ hostile aliens?)
The chance of an asteroid or supernova big enough to kill us is pretty tiny on human
timescales. The dinosaur killer was ~66 million years ago. These things are really rare, and we
already have most of the tech needed for an OK defense.
Let's say we want to make ASI eventually; the question is whether to rush ASAP, or to take an
extra 100 years to really triple check everything. If we think rushing has any significant chance
of going wrong, and there are no other techs with a larger chance of going wrong, we should
go slow.
To make the case for rushing, you need to argue that the chance of nuclear doom / grey goo /
something else in the intervening years when we don't have ASI is greater than the chance of ASI
doom if we rush (minus ASI doom from going slow; but if you think that is large, then never
making ASI is an option).
It is actually hard for a Mars base to add much more diversity protection. Asteroids can be
spotted and deflected. Gamma ray bursts will hit Mars too. Bad ASI will just go to Mars. The
Mars base needs to be totally self-sufficient, which is hard.
REPLY (1)
Esk Mar 8
Before I answer this, I'd like to note, that I do not intend to prove that you are wrong in
your conclusions. What I want to do is to show you that your method of reaching your
conclusions is not rigorous enough. It looks like I'm trying to state some other conclusion,
but it is because I do not see how to avoid it. In fact I do not really know the answer.
> These things are really rare, and we already have most of the tech needed for an OK
defense.
How about a nuclear war? Or more infectious COVID? Or some evolved insect that eats
everything and doubles its population daily? Or how about an asteroid, which our
defences will strike to divert from Earth, but it explodes releasing a big cloud of gas and
dust, which then will travel to Earth and kill us all?
Complex systems can end in ruin surprisingly fast, and in surprising ways too.
Are we concerned about ourselves only, or our children and grandchildren also matter?
Mars colonization cannot be done in a weekend, it would need decades or even centuries.
> It is actually hard for a mars base to add much more diversity protection.
If there is a human population on Mars of 1M self-sustaining people, it will add a lot and it
will open other opportunities. For example, it is much easier to get to orbit from Mars, so
it is easier to mine asteroids or to create a completely artificial structure in space that
can host a population of another million people. It will open a path to subsequent
exploration and colonization beyond our Solar System.
> the question is whether to rush ASAP, or to take an extra 100 years to really triple check
everything
REPLY (1)
Donald 17 hr ago
I wasn't talking about viruses or nukes when I said "these things are really rare" and
"we already have an ok defense". I was talking about asteroids and supernovae.
I don't think we have enough nukes to kill everyone, there are lots of remote villages
in the middle of nowhere. So nukes aren't that much of an X-risk.
"Or more infectious COVID?" Well vaccines and social distancing (and again.
something that doesn't kill 100% of people isn't an X-risk. If a disease is widespread
and 100% lethal, people will be really really social distancing. Otherwise, it's not an
X-risk. )
"Or some evolved insect that eats everything and doubles its population daily?"
Evolution has failed to produce that in the last million years, no reason to start now.
(Actually some pretty good biology reasons why such thing can't evolve)
"Or how about an asteroid, which our defences will strike to divert from Earth, but it
explodes releasing a big cloud of gas and dust, which then will travel to Earth and kill
us all?" Asteroids aren't explosive. Exactly how is this gas cloud lethal? Gas at room
temperature expands at ~300m/s. Earth's radius is ~6 *10^6m So that's earths
radius every 6 hours. So only a small fraction of the gas will hit earth.
"Are we concerned about ourselves only, or our children and grandchildren also
matter? Mars colonization cannot be done in a weekend, it would need decades or
even centuries." It doesn't matter. Suppose we are thinking really long term. We want
humanity to be flourishing in a trillion years. If you buy that ASI is coming within 50,
and that friendly ASI is a win condition, it doesn't matter what time scales we are
thinking on beyond that.
REPLY
William Mar 8
Science and technology do not have more benefits than harms. Science and technology are tools
and like all tools, they cannot do anything without a conscious actor controlling them and making
value judgements about them. Therefore, they are always neutral and their perceived harms and
benefits are only a perfect reflection of the conscious actor using them.
This is a mistake made very often by the rationalist community. Science and technology can never
decide the direction of culture or society; they can only increase the speed at which we get there.
We decide how the tool is used or misused.
The reason incredibly powerful technology like nuclear energy and AI chills many people to the
bone is because they are being developed at times when society is not quite ready for them. The
first real use for atomic energy was a weapon of mass destruction. This was our parents' and
grandparents' generation! There is a major war raging in Europe with several nuclear facilities
already at risk for a major catastrophe. What would happen if the tables turned and Russia felt
more threatened? Would those facilities not be a major target?
The international saber rattling is a constant presence in the news. The state of global peace is still
incredibly fragile. The consequence of a nuclear disaster is a large area of our precious living
Earth becoming a barren hell for decades and centuries. Are we stable and mature enough for this
type of power?
And just look at how we have used the enormous power that we received from fossil fuels.
What percentage of that energy went to making us happier and healthier? Yes we live a bit longer
than 2 centuries ago, but most of that improvement is not due to the energy of fossil fuels.
Why would the power we receive from AI and nuclear energy be used any differently? Likewise they
will have some real beautiful applications that help human beings, but mostly they will be used to
make the rich richer, to make the powerful more powerful, to make our lives more "convenient"
(lazy), and likewise they will disconnect us from each other and from this incredible living planet.
REPLY
The derivation of Kelly assumes you have a single bankroll, no expenses, and wagering on that
bankroll is your only source of income, and seeks to maximize the long-run growth rate of your
bankroll. If Bob is a consultant with 10k/month of disposable income, and he has $3k in savings, it
totally makes sense for him to wager the entire 3k on the 50% advantage coin flip. For Kelly
calculations he should use something like the discounted present value of his income stream, using
a pessimistic discount rate to account for the fees charged by lenders, the chance of getting fired,
etc.
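A sketch of that calculation (Python; the 30% discount rate and ten-year horizon are made-up placeholders for "pessimistic"):

    def effective_bankroll(savings, monthly_income, annual_discount=0.30, months=120):
        # savings plus the discounted present value of future disposable income
        r = (1 + annual_discount) ** (1 / 12) - 1  # monthly discount rate
        dpv = sum(monthly_income / (1 + r) ** t for t in range(1, months + 1))
        return savings + dpv

    bankroll = effective_bankroll(3_000, 10_000)
    print(bankroll)        # ~ $420k: the $3k in cash is a rounding error
    print(0.5 * bankroll)  # a Kelly stake at fraction 0.5 dwarfs the savings

On that accounting, wagering all of the $3k is far below the Kelly stake, not above it.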
If we settled multiple star systems, and found a way to reliably limit the damage to one star system,
then we should be much more willing to experiment with AGI.
REPLY
If you can make a temporary mental switch and see humans as chattel, some interesting
perspectives happen. Like how 100 thalidomide-like incidents would compare with having half as
many cancers, or everybody living an extra 5 healthy years.
Covid was bearable, even light in terms of QALYs - but there was no expected utility to be gained
by playing Russian roulette. It was just stupid loss.
AI... not so much. Last november I celebrated: we are no longer alone. We may not have
companionship, but where it matters, in the getting-things-done department, we finally have non-
human help. The expected upside is there, and not in a sliver of probability. I'd gladly trade 10
covids or a nuclear war for what AI can be.
REPLY
The number I would like and don't have is how many wet markets there are in the world with
whatever features, probably selling wild animals, make the Wuhan market a candidate for the origin
of Covid. If it is the only one, then Covid appearing in Wuhan from it is no odder a coincidence than
Covid appearing in the same city where the WIV was researching bat viruses. If it was one of fifty or
a hundred (not necessarily all in China), then the application of Bayes' Theorem implies a posterior
probability for the lab leak theory much higher than whatever the prior was.
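In odds form the update is short enough to write out (Python; the 10% prior and the market counts are placeholders, and the assumption that a natural spillover was equally likely at each candidate market is mine, not the comment's):

    def posterior_lab_leak(prior, n_candidate_markets):
        # Bayes in odds form: "emerged in the WIV's city" has likelihood ~1 under
        # a lab leak and ~1/N under a wet-market origin, so the likelihood ratio is N
        odds = prior / (1 - prior) * n_candidate_markets
        return odds / (1 + odds)

    print(posterior_lab_leak(0.10, 1))   # 0.10: the coincidence washes out
    print(posterior_lab_leak(0.10, 50))  # ~0.85: much higher than the prior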
REPLY (1)
Elohim Mar 9
When the LHC was about to be turned on, a similar group of doomers started saying that it was
going to destroy the world through black holes or whatever. Of course the LHC didn't destroy the
world; it led to the discovery of the Higgs boson. The AI doomers are exactly like them.
REPLY