
SoYouWantToBeASeedAIProgrammer



Written by EliezerYudkowsky.

The first thing to remember is...

Not everyone needs to be a seed AI programmer. SIAI will probably need at least 20 regular
donors per working programmer, perhaps more. There is nothing wrong with being one of those
donors. It is just as necessary.

Please bear this in mind. Sometimes it seems like everyone wants to be a seed AI programmer;
and that, of course, is simply not going to work. Too many would-be cooks, not enough
ingredients. You may not be able to become a seed AI programmer. In fact it is extremely likely
that you can't. This should not break your heart. You can learn seed AI after the Singularity, if we
live through it and we're all still human-sized minds - learn the art of creating a mind when it's
safe, when you can relax and have fun learning, stopping to smell the roses whenever you feel
like it, instead of needing to push as hard as possible.

Meanwhile, before the Singularity, consider becoming a regular donor instead. It will be a lot
less stressful, and right now we have many fewer regular donors than people who have wandered
up and expressed an interest in being seed AI programmers.

Seed AI. It sounds like a cool idea, something that would look good on your resume, so you fire
off a letter to the Singularity Institute indicating that you may be vaguely interested in coming on
board.

Stop. Hold on. Think for a second about what you're involving yourself in. We're talking about
the superintelligent transition here. The closest analogy would be, not the rise of the human
species, but the rise of life on Earth.

We're talking about the end of the era of evolution and the beginning of the era of recursive self-
improvement. This is an event that happens only once in the history of an entire... "species" isn't
the right term, nor "civilization"; whatever you call the entire sequence of events arising from
Earth-originating intelligent life. We are not talking about a minor event like the Apollo moon
landing or the invention of the printing press.

This is not something you put on your resume. This is not something you do for kicks. This is not
something you do because it sounds cool. This is not something you do because you want your
name in the history books. This is not, even, something that you do because it strikes you as
beautiful and terrifying and the most important thing in the world, and you want to be a part of it,
you want to have been there. These are understandable feelings, but they are, in the end, selfish.
This is something you should only try to do if you feel that it is the best thing you can do, out of
all the things you might do; and that you are the best one to do it, out of all the people who might
apply for a limited number of positions. The best. Your personal feelings are not a consideration.
Is this the best thing for Earth?

You may not get paid very well. If there are any dangerous things that happen to seed AI
programming teams, you will be directly exposed to them. At any time the AI project might fail
or run out of funding and you will be left with nothing for your work and sacrifice. If the AI
project moves to Outer Mongolia you will be expected to move there too. Your commitment is
permanent; once you become an expert on a piece of the AI, on a thing that doesn't exist
anywhere else on Earth, there will be no one who can step into your shoes without substantial
delay. Ask yourself whether you would still want to be a seed AI programmer if you knew, for
certain, that you would die as a result - the project would still have a reasonable (but not certain)
chance of success; but, win or lose, you would die before seeing the other side of the Singularity.

Do you, having fully felt the weight of those thoughts, but balancing them against the importance
of the goal, want to be an AI programmer anyway? If so, you still aren't thinking clearly. The
previous paragraphs are not actually relevant to the decision, because they describe personal
considerations which, whatever their relative emotional weight, are not major [existential risks].
You should be balancing the risks to Earth, not balancing the weight of your emotions - you
cannot count on your brain to do your math homework. If you reacted emotionally rather than
strategically, you may not have the right frame of mind to act calmly and professionally through
a hard takeoff. That too is a job requirement. I have to ask myself: "Will this person panic when
s/he realizes that it's all really happening and it's not just a fancy story?"

Certain people are reading this and thinking: "Oh, but AI is so hard; everyone fails at AI; you
should prove that you can build AI before you're allowed to talk about people panicking when it
all starts coming true." They probably aren't considering becoming AI programmers. But for the
record: It doesn't matter how much macho hacker testosterone has become attached to the problem of AI; it makes no sense to even try to build a seed AI unless you expect that final success would be a good thing, meaning that it is your responsibility to care about the complete trajectory from start to finish, including any problem that might pop up along the way. It doesn't
matter how far off the possibility is; the integrity of the entire future project trajectory must be
considered and preserved at all points. If someone would panic during a hard takeoff, you can't
hire them; you're setting yourself up to eventually fail even if you do everything right.

The tasks that need to be carried out to create a seed AI can be divided into three types:

First, there are tasks that can be easily modularized away from deep AI issues; any decent True
Hacker should be able to understand what is needed and do it. Depending on how many such
tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition
for these slots to be very tight.

Second are tasks that require a deep understanding of the AI theory in order to comprehend the problem.

There's a tradeoff between the depth of AI theory, the amount of time it takes to implement the
project, the number of people required, and how smart those people need to be. The AI theory
we're planning to use - not [LOGI], but LOGI's successor - will save time, and it means that the project may be able to get by with fewer people. But those few people will have to be brilliant.

What kind of people might be capable of learning the AI theory and applying it?

Aside from anything else, they need to be very smart people with plenty of raw computational
horsepower. That intelligence is a prerequisite for everything else, because it is what powers any more complex abilities or skills.

Within that requirement, what's needed are people who are very quick on the uptake; people who
can successfully complete entire patterns given only a few hints. The key word here is
"successfully" - thanks to the way the human brain is wired, everyone tries to complete entire
patterns based on only a few hints. Most people get things wildly wrong, don't realize they have
it wildly wrong, and resist correction.

But there are a few people I know, just a few, who will get a few hints, complete the pattern, and
get it all right on the first try. People who are always jumping ahead of the explanation and
turning out to have actually gotten it right. People who, on the very rare occasions they get
something wrong, require only a hint to snap back into place. (My guess is that some people, in
completing the pattern, "force link" pieces that don't belong together, overriding any disharmony,
except perhaps for a lingering uneasy feeling, soon lost in the excitement of argument. My guess
is that the rare people with a talent for leaping to correct conclusions don't try to build patterns
where there are any subtle doubts, or that they more rapidly draw the implications that would
prevent the pattern from being built.)

Take, for example, people hearing about evolutionary psychology for the first time. Most people
I've seen immediately rush off and begin making common mistakes, such as postulating group
selection, seeing evolution as a benevolent overarching improvement force, confusing
evolutionary causation with cognitive motivation, and so on. When this happens I have to lecture
on the need to become acquainted with the massive body of evidence and accumulated inference
built up about which kinds of evolutionary reasoning work, and which don't - the need to
hammer one's intuitions into shape. When I give this lecture I feel like a hypocrite, because I,
personally, needed only a hint for the picture to snap into place. I've also seen a few other people
do this as well - immediately get it right the first time. They rush to conclusions just as fast, but
they rush to correct conclusions. They don't need to have the massive body of accumulated
evidence hammered into them before they let go of their precious mistakes; they just never
postulate group selection or evolutionary benevolence to begin with. No doubt many people reading this will have leapt to a right conclusion on at least one occasion and are now selectively remembering that occasion; on the other hand, the very rare people who genuinely do have very high firepower in this facet of intelligence should recognize themselves in this description.

If I were to try quantifying the level of brainpower necessary, my guess is that it's around the
10,000:1 or 100,000:1 level. This doesn't mean that everyone with a 160 IQ or 1600 SAT will fit
the job, nor that anyone without a 160 IQ or 1600 SAT is disqualified. Standardized tests don't
necessarily do a very good job of directly measuring the kind of horsepower we're looking for.
On the other hand, it isn't very likely that the person we're looking for will have a 120 IQ or a
1400 on the SAT.

I sometimes run into people who want to work on a seed AI project and who casually speak of
"also hiring so-and-so who's a senior systems architect at the place I work; he's incredibly smart".
The "incredibly smart" senior systems architect is probably at around the 1,000:1 level; the
person who used the phrase "incredibly smart" to describe that level of intelligence is probably
around the 100:1 level. That's not good enough to be an AI programmer; even leaving aside the
ethical requirements, this is simply out of range for positions that can be filled by hiring friends
of friends. Maybe if you went looking on the Extropians mailing list, or a Foresight Gathering,
you would find a few potential AI programmers. To expect to find an AI programmer at your
workplace... it's not going to happen. Too much of a coincidence.

The theory of AI is a lot easier than the practice, so if you can learn the practice at all, you should
be able to pick up the theory on pretty much the first try. The current theory of AI I'm using is
considerably deeper than what's currently online in [Levels of Organization in General
Intelligence] - so if you'll be able to master the new theory at all, you shouldn't have had trouble
with LOGI. I know people who did comprehend LOGI on the first try; who can complete
patterns and jump ahead in explanations and get everything right, who can rapidly fill in gaps
from just a few hints, who still don't have the level of ability needed to work on an AI project.
You need to pick up the theory very rapidly and intuitively, so that you can learn the practice in
less than a lifetime.

To be blunt: If you're not brilliant, you are basically out of luck on being an AI programmer. You
can't put in extra work to make up for being nonbrilliant; on this project the brilliant will be
putting in extra work to make up for being human. You can't take more time to do what others do
easily, and you can't have someone supervise you until you get it right, because if the simple
things have to be hammered in, you will probably never learn the complex things at all.

So you'll learn AI programming after the Singularity and probably get more real enjoyment out
of it than we did, because you won't be rushed.

Very few people are qualified to be AI programmers. That's how it goes.

(I'm sorry if you got the point a while back and I've just been hammering it in since then, but it
takes a lot of effort to pry some people loose from the idea.)

In the third category are tasks that can only be accomplished by someone capable of
independently originating and extending AI theory. We will have to hope that all of these jobs
can be done by one person, because one person is all that we are likely to have.

I distinguish between programmers, AI programmers, and Friendly AI programmers, corresponding to the three task types.

Some people are reading this and thinking: "Well, Eliezer, brilliant people are hard to find; you
may have to compromise." You'll be glad to know that I did compromise. I compromised away
from requiring that an AI programmer be capable of independently extending the AI theory.
Unfortunately I think that's as far as the compromise can go and retain what seems like a realistic
probability of success. I still worry that being brilliant enough to grasp AI theory is not enough,
and that it may turn out that a Friendly AI project cannot in fact succeed without many people of the
caliber needed to invent AI theory. But it seems like people of that level might be impossibly
hard to find, and I see a realistic chance of succeeding with the merely brilliant - people who
fully comprehend the theory, even if they would not have been capable of inventing it and cannot
take it further, as long as they have enough hackerish creativity to apply the theory and translate it into a systems design without a Friendly AI programmer needing to hover over them constantly.

The inherent difficulty of the problem is logically unrelated to what seems like a "reasonable
demand", or what's easy to accomplish with resources we already have. The worker bees of an
AI project would be queens in any other hive, people who would ordinarily be inventing their
own brilliantly original ideas and translating them into designs. Finding that class of people will
not be easy. But the idea that I am supposed to personally exert enough design energy to translate
an entire AI into a drone-codable specification is hopeless. I would burn out like dry grass. I will
count it a success if I can push the project over the impassable part of the AI hump, so that the
rest of the work can be done by people with enough creative energy that they would ordinarily be
starting their own original complex projects from scratch.

Hopefully this leaves some margin for error, so that there is enough free creative energy in the
project to handle unexpected obstacles. The other problem with "compromising" on people who
are less than genuinely brilliant is that the project would always be struggling - no safety margin.
Such a project will fail the first time something pokes it.

"What should I study in order to join the team?"

Are you looking to fill one of the nongenius slots? If so, the primary prerequisites will be
programming ability, experience, and sustained reliable output. We will probably, but not
definitely, end up working in Java. Advance knowledge of some of the basics of cognitive
science, as described below, may also prove very helpful. Mostly, we'll just be looking for the
best True Hackers we can find.

But bear in mind that other requirements are universal - we can't "just hire" programmers, even if
you know some really smart guy at your workplace, etc. See the later comments about ethics,
and the earlier comments about people who might panic in a crisis.

"What should I study to become an AI programmer?"


At minimum you will need to grasp the elementary ideas of information, entropy, Bayesian
reasoning, and Bayesian decision theory; the things that bind together the physical specification
of a system and its cognitive content. You should see the "information content" of a pattern as its
improbability or its Kolmogorov complexity, rather than ones and zeroes on a hard drive; you
should be capable of explaining how the scientific method and conversational argument and the
visual cortex are all really cleverly disguised manifestations of Bayes' Theorem; you should
distinguish between subgoals and supergoals on sight and without needing to think about it.
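
To make those elementary ideas concrete, here is a minimal toy sketch in Java (mine, purely illustrative; the class and method names are hypothetical and have nothing to do with the actual AI theory): the information content of an event as the negative log of its probability, and a Bayesian update of a prior given two likelihoods.

    // Toy illustration only: surprisal and a Bayesian update.
    public class BayesToy {
        // Shannon information content of an event, in bits: an event of
        // probability 1/8 carries -log2(1/8) = 3 bits of information.
        static double infoContent(double p) {
            return -Math.log(p) / Math.log(2);
        }

        // Bayes' Theorem: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H)).
        static double posterior(double priorH, double eGivenH, double eGivenNotH) {
            double joint = eGivenH * priorH;
            return joint / (joint + eGivenNotH * (1 - priorH));
        }

        public static void main(String[] args) {
            System.out.println(infoContent(0.125));          // 3.0 bits
            System.out.println(posterior(0.01, 0.8, 0.096)); // ~0.078
        }
    }

If the posterior in the last line surprises you - a test with 80% sensitivity coming back positive, yet the hypothesis still ending up below an 8% probability - that is exactly the kind of intuition Bayesian reasoning is supposed to repair.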

You should be familiar with the design signature of natural selection - optimization by the
incremental recruitment of fortunate accidents, following pathways in fitness gradients which are
adaptive at each intermediate point and which are directed in the maximally adaptive direction at
each intermediate point. Not because we're going to use evolution, of course, but because you
need to know what a nonhuman design process looks like and how it behaves
nonanthropomorphically. Evolution is currently the only such example around.
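
For the flavor of that signature, here is a deliberately crude toy sketch in Java (mine, illustrative only - remember, we are not going to use evolution; the fitness function and parameters are arbitrary choices): propose a random variation to one gene, keep it only if fitness improves, discard it otherwise. Every intermediate point on the resulting pathway is itself adaptive, and nothing anywhere plans ahead.

    import java.util.Random;

    // Toy illustration of optimization by the incremental
    // recruitment of fortunate accidents.
    public class EvolutionToy {
        // Higher is better; the single peak is at the origin.
        static double fitness(double[] genome) {
            double f = 0;
            for (double g : genome) f -= g * g;
            return f;
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            double[] genome = new double[8];
            for (int i = 0; i < genome.length; i++) genome[i] = rng.nextGaussian() * 5;

            double best = fitness(genome);
            for (int generation = 0; generation < 100000; generation++) {
                int locus = rng.nextInt(genome.length);    // one random accident
                double saved = genome[locus];
                genome[locus] += rng.nextGaussian() * 0.1;
                double f = fitness(genome);
                if (f > best) best = f;                    // recruit the fortunate accident
                else genome[locus] = saved;                // selection discards the rest
            }
            System.out.println("final fitness: " + best);  // creeps up toward 0.0
        }
    }

Watch how it climbs: no foresight, no memory of where it has been, no ability to accept a temporary loss in exchange for a later gain. That is what a nonhuman design process looks like.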

You should be familiar with evolutionary psychology and game theory, not because we're
planning to build an imperfectly deceptive social agent or make it play zero-sum games, but so
that you will have applied this knowledge to debugging your own mind, and so that you can detach your knowledge of morality from the human domain.

You should be familiar with at least a few specific examples of the intricacy of biological neural
circuitry and computing in single neurons and functional neuroanatomy, not because we're going
to be imitating human neurology, but so that you don't expect cognition to be simple. You should
be familiar with enough cognitive psychology of humans that you know human reasoning is not
Aristotelian logic - for example, Gigerenzer's fast and frugal heuristics, Tversky and Kahneman,
and so on; being able to interpret all of this as an imperfect approximation of Bayes' Theorem is
another plus.

You should have read through [Levels of Organization in General Intelligence] and understood it
fully. The AI theory we will actually be using is deeper and less humanlike than the theory found
in LOGI, but LOGI will still help you prepare for encountering it.

The four major food groups for an AI programmer:

• Cognitive science
• Evolutionary psychology
• Information theory
• Computer programming

Breaking it down:

• Cognitive science
o Functional neuroanatomy
 Functional neuroimaging studies
 Neuropathology; studies of lesions and deficits
 Tracing functional pathways for complete systems
o Computational neuroscience
 Suggestions: take a look at the cerebellum and the visual cortex
o Computing in single neurons
o Cognitive psychology
 Cognitive psychology of categories - Lakoff and Johnson
 Cognitive psychology of reasoning - Tversky and Kahneman
o Sensory modalities
 Human visual neurology. Big, complicated, very instructive; knock yourself out.
o Linguistics
o Note: Some computer scientists think "cognitive science" is about Aristotelian logic, programs written in Prolog, semantic networks, philosophy of "semantics", and so on. This is not useful except as a history of error. What we call "cognitive science" they call "brain science". I mention this in case you try to take a "cognitive science" course in college - be sure you know what you're getting into.
• Evolutionary psychology
o Popular evolutionary psychology; dating and mating; Robert Wright and Matt Ridley
o Formal evolutionary psychology; neo-Darwinian population genetics and complex adaptation; Tooby and Cosmides
o Game theory; nonzero-sum and zero-sum games
o Evolutionarily stable strategies for social organisms
o Tit-for-tat, the evolution of cooperation, the evolution of cognitive altruism
o Evolutionary psychology of human "significantly more general" intelligence
 Mostly this means reading LOGI; there's not much else out there.
 But see also Lawrence Barsalou and Terrence Deacon.
o Evolutionary biology
 Incrementally adaptive pathways; levels of organization; etc.
o Biology (a complex system not designed by humans)
o Genetics
o Gene regulatory networks (another good look at evolution's bizarre signature, and also a look at the way humans actually get constructed)
o Quantitative genetics
o Anthropology - the good old days
• Information theory (a short sketch of the entropy and expected-utility calculations follows this list)
o Shannon communication theory
 Shannon entropy
 Shannon information content
 Shannon mutual information
o Kolmogorov complexity
 Solomonoff induction
o Bayesian statistics
 Interpretation of human thought as Bayesian inference - see [Jaynes].
o Any other kind of statistics
o Utilitarian Bayesian decisionmaking
 Decision theory is not classically part of "information theory", but it does, in fact, belong together with the other items in this category.
 Child goals, parent goals, "supergoals" (intrinsic desirability)
 Actions and desirability - read [Creating Friendly AI] as a preliminary to the longer story.
• Computer programming
o Knowledge of many languages
o Java programming (that's probably what we'll end up doing it in)
o Being an excellent programmer
o Parallelism
 Multithreading
 Clustered and distributed computing - we may not need this for a while, but then again, we may
o Any kind of experience working with complicated dynamic data patterns controlled by compact mathematical algorithms - some of the interior of the AI may end up looking like this
• Other stuff
o Computer security (experience with defensive caution; not that it's sufficient for Friendly AI or even a good attitude, but it's a start)
o Physics
o Thermodynamics
 The second law of thermodynamics
 Noncompressibility of phase space
 The arrow of time and the development of complex structure
o Traditional AI methods
 History of error; don't repeat past mistakes
 We might reuse one or two design patterns at some point
o Transhumanism or transhumanist SF - sufficient exposure to have a very high future shock tolerance; helps to "take it all in stride"
o Mathematics
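
Here is the sketch promised above, for the information-theory and decision-theory entries (mine, purely illustrative; the class and method names are hypothetical): the Shannon entropy of a distribution in bits, and utilitarian Bayesian decisionmaking as picking the action with the highest expected utility over a probability distribution on world-states.

    // Toy illustration only: entropy and expected-utility decisionmaking.
    public class InfoDecisionToy {
        // Shannon entropy H(X) = -sum of p(x) log2 p(x), in bits.
        static double entropy(double[] dist) {
            double h = 0;
            for (double p : dist)
                if (p > 0) h -= p * Math.log(p) / Math.log(2);
            return h;
        }

        // Pick the action with the highest expected utility, where
        // utility[action][state] scores each action in each world-state.
        static int bestAction(double[] stateProbs, double[][] utility) {
            int best = 0;
            double bestEU = Double.NEGATIVE_INFINITY;
            for (int a = 0; a < utility.length; a++) {
                double eu = 0;
                for (int s = 0; s < stateProbs.length; s++)
                    eu += stateProbs[s] * utility[a][s];
                if (eu > bestEU) { bestEU = eu; best = a; }
            }
            return best;
        }

        public static void main(String[] args) {
            System.out.println(entropy(new double[] {0.5, 0.5})); // 1.0 bit
            double[] probs = {0.9, 0.1};            // beliefs about the world
            double[][] utility = {{1, 1}, {0, 20}}; // utility[action][state]
            System.out.println(bestAction(probs, utility)); // 1: EU 2.0 beats 1.0
        }
    }

Note that the second action wins despite paying off only in the unlikely state; expected utility weighs desirability by probability, which is the whole point of being a Bayesian decisionmaker rather than a wishful thinker.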

Obviously we are not requiring a doctorate, in cognitive science or any other field, because no
doctorate is going to tell you one-fifth of what you need to know. (I am tempted to say that a
doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth
against them - just because you acquired a doctorate in AI doesn't mean you should be
permanently disqualified.)

What you need is not an academic specialization in any one of these fields, but an interested
amateur's grasp of as many of them as possible. An AI programmer doesn't need to
independently pull together a single consilient explanation from this mess of separate disciplines.
What is useful is if you understand the basics of these separate disciplines as they are usually
understood, so that a Friendly AI programmer who does know the consilient explanation has a
shared language in which to describe specific cognitive processes.

If you are to be able to implement cognitive processes on your own without constant supervision,
you will need to be very fast on the uptake, fill in patterns from a few hints and get them right the
first time, learn new skills rapidly and easily, discriminate subtle distinctions without needing
them hammered into you, and have a great deal of plain good-old-fashioned mental horsepower.
Even so, much of your first days on the project will still consist of having your suggestions shot
down over and over again, until you pick up the knack. For every AI problem there are a million
plausible-sounding methods that do nothing; a thousand methods that are sort of right or almost
right or work a little but then break down; and a small handful of manifestations of the right way.
It's going to take a while to catch the rhythm of how this works, and learn to distinguish solutions
that accomplish something from the ones that spin their wheels and go nowhere.

"Gee, y'know, I think I'll start my own AI project, in like my spare time while I'm finishing high
school. How much effort does it take to become a Friendly AI programmer?"

If you are not willing to devote your entire life to doing absolutely nothing else, you are not
taking the problem seriously. By that I mean your entire life. I don't mean that you do it as a
hobby. I don't mean that you do it as your day job. I don't mean that you give it all your time and
energy. I mean that you allocate your entire self to be sculpted by the problem into the shape of a
Friendly AI programmer.

There is nothing unusual about that. Consider human history, and how many people have
sacrificed so much more for so much less. I don't consider myself at all remarkable by
comparison with millions of anonymous people throughout human history who willingly walked
into much more inconvenience, pain, and death than I ever have experienced or expect to
experience. Total dedication is something that plenty of people can and do undertake, and it isn't
anywhere near as painful as commonly thought.

That is what it takes to start your own AI project, as a necessary but nowhere near sufficient
condition. You will also need to be as smart as it gets. In terms of raw horsepower, there are
probably tens or hundreds of anonymous world-class geniuses around for each one that makes it
into the history books. It also takes luck, perseverance, the right place at the right time, a field in
chaos so that it can be set in order, etc. But it is nonetheless true that only a world-class genius
has any hope at all.

"How much effort does it take to become an AI programmer?"

If you want to be an AI programmer you should be willing to devote most of your life and
yourself to AI. All of yourself would be better, but is not strictly necessary, and I am aware that it
is considered hard.

"What does it take to get a security clearance?"

There are two approaches to ethics. The first is to set up a battery of tests to keep out noticeably bad people. That is, you start with an "allow all" policy and then add denials.

The second approach, which makes me feel a bit ashamed because it seems exclusionary and unfair, is to say: "Even if someone passes every formal test I can think of, I'm not going to hire them unless they strike me as unusually trustworthy, because anyone else is just too much of a risk."

Having considered this at some length, I think that only the second option is safe. If something
feels wrong, but you can't really think of a "justification" for avoiding it, avoid it anyway. Doing
something that makes you feel a little bit uneasy, without really being able to say why, can be
much more dangerous than taking a known calculated risk. Often it means you know so little that
you don't have any idea how much danger you're in or how large a risk you're taking.

I know some people whose ethics are such that I would actually feel good about adding them to a
Friendly AI project. The thought of adding anyone else makes me feel a little bit uneasy.

You've probably read "The Lord of the Rings", right? Don't think of this as a programming
project. Think of it as being something like the Fellowship of the Ring - the Fellowship of the AI,
as it were. We're not searching for programmers for some random corporate inventory-tracking
project; we're searching for people to fill out the Fellowship of the AI. Far less important projects
get hundreds of millions of dollars and thousands of programmers. What is our substitute?
Knowing exactly what we're doing. Having exceptional people on the team. Otherwise it simply
isn't realistic. We are gathering for the last defense of Earth. You can't fill those positions by
running a classified ad.

From the standpoint of personnel management, the original Fellowship of the Ring was a
disaster. The only real heavyweights were Gandalf and Aragorn; two out of nine. Legolas and
Gimli were added to fill minority quotas. Boromir was allowed onto the team despite being
blatantly unreliable. No fewer than three useless hobbits shoved themselves onto the team, and
nobody screwed up the courage to kick them off. The Fellowship splintered almost immediately
after it was assembled. In real life, the Fellowship would have been strawberry jam on toast. We
can, and must, do better.

This is Earth's finest hour, and Earth's finest should meet it.

Much of what I have written above is for the express purpose of scaring people away. Not that
it's false; it's true to the best of my knowledge. But much of it is also obvious to anyone with a
sharp sense of Singularity ethics. The people who will end up being hired didn't need to read this
whole page; for them a hint was enough to fill in the rest of the pattern.

Of course you would have to dedicate your life to a Friendly AI project. Of course you have to be
brilliant. It's a Friendly AI, for goodness' sake! It's asking enough of human flesh and brain that we build one at all, let alone that we do it with less than our full effort.

The standards are set terribly high because this is a terribly high thing to attempt. We cannot be
"reasonable" about the requirements because Nature, herself, has no tendency to be "reasonable"
about what it takes to build a Friendly AI. This is and always was an unreasonable problem.

No one who might actually be hired will be scared off by the thought of an extremely difficult
job or harsh competition. Even if they are brilliant people who haven't yet become comfortable
admitting their brilliance to themselves, they will, I expect, say something to themselves along
the lines of "If Eliezer hasn't gotten funding in a couple of years, he might have taken the design
down to the point where it could be implemented by nongeniuses, or the Singularity Institute
might need to start hiring and I, heaven help them, might end up being one of the best people
available at that point. And if not, well, I was planning to spend the first hundred years after the
Singularity studying cognitive science anyway, so I'll get a head start." Or whatever. If someone
is brilliant enough to have any realistic chance of becoming an AI programmer, and ethical
enough to be accepted, nothing I could possibly say would scare them away from applying. They
are not thinking in terms of "scared/not scared"; they are thinking in Singularity ethics.

If that describes you, please subscribe to the mailing list [seedAIwannabes@singinst.org] by sending mail to [majordomo@singinst.org]. Plenty of people have expressed vague interest, but
SIAI has very few serious candidates. Right now we do not have enough AI programmers to put
a team together, and that is a blocker problem even if we had a million dollars tomorrow.
Subscribe even if you're not sure you're totally brilliant; if we eventually accumulate a bunch of
totally brilliant people and it's clear you're outclassed, you can always withdraw yourself from
consideration at that point. Meanwhile it may prove useful to have gathered a group of the
smartest people available at any given point.

If that doesn't describe you, then again I emphasize that not everyone has to be an AI
programmer! You don't have to be an AI programmer to help! It may be frustrating to see
something exciting, like the superintelligent transition, and not be able to rush at it immediately, but even getting the project started is a serious strategic problem with many
simultaneous dependencies. Don't try to solve the entire problem in a single step. See if you can
figure out how to make one definite contribution. Then you can go on to consider expanding it.

A revision has rendered my previous comment obsolete. It has also addressed a primary concern, which is that we don't have time to build SeedAI programmers; we must find them. -- JustinCorwin

Ironic Humor Dept.: "[We assume that] ... Liouville's theorem holds. This assumption is known
as adiabatic incompressibility of phase space, or AI for short." -- MitchellPorter

This bit of ironic humor was rendered inapplicable by a revision. Sorry, Mitchell. --
EliezerYudkowsky

Legolas and Gimli were useful team members; they just couldn't independently extend ring theory. -- Starglider

I have to wonder about the psychological effects of this essay on potential programmers. What if
they're not overly enthusiastic about the idea, but they fulfill the requirements, and are in
complete rational agreement with the essay, and would be willing to do that sort of work? What
if they simply don't have the self-confidence necessary to overcome the inertia of living life?
What if your strict requirements invoke someone's sense of modesty? The most capable people
are often the most modest.

It would require a huge jolt to get someone who agreed with the Sing. Inst.'s values to put in all
the extra study required to round out their knowledge on the chance that they'd be accepted as a
member of the team, especially if that meant getting deep into one or two fields that they as yet have no experience in. Are you going to exclude people who don't already know almost
everything they need to know to apply immediately for the position? --ChrisCapel

It doesn't require a 'huge jolt'. It requires being a moral utilitarian. If there is a significant chance
that devoting the rest of your life to studying seed AI will save six billion lives and enable a
vastly richer future, you should do it. If you understand the issues deeply enough to consider this
in the first place, the only question is 'would the increase in the chance of being a seed AI
programmer be worth the drop in the level of donations I could make to the SIAI, if I focused
on that'. Even that's not necessarily a problem; in my experience focusing on learning seed AI
has actually increased my expected financial contribution to the SIAI. -- Starglider

My opinion: Everyone who subscribed to the list with the words 'AI WANNABE' in it should
immediately be eliminated from consideration. -- Marc Geddes

Why? Are you kidding? --SomeAnonymousGuy
