So You Want To Be A Seed AI Programmer
Written by Eliezer Yudkowsky.
The first thing to remember is...

Not everyone needs to be a seed AI programmer. SIAI will probably need at least 20 regular
donors per working programmer, perhaps more. There is nothing wrong with being one of those
donors. It is just as necessary.

Please bear this in mind. Sometimes it seems like everyone wants to be a seed AI programmer;
and that, of course, is simply not going to work. Too many would-be cooks, not enough
ingredients. You may not be able to become a seed AI programmer. In fact it is extremely likely
that you can't. This should not break your heart. You can learn seed AI after the Singularity, if we
live through it and we're all still human-sized minds - learn the art of creating a mind when it's safe, when you can relax and have fun learning, stopping to smell the roses whenever you feel
like it, instead of needing to push as hard as possible.

Meanwhile, before the Singularity, consider becoming a regular donor instead. It will be a lot
less stressful, and right now we have many fewer regular donors than people who have wandered
up and expressed an interest in being seed AI programmers.

Seed AI. It sounds like a cool idea, something that would look good on your resume, so you fire off a letter to the Singularity Institute indicating that you may be vaguely interested in coming on board.

Stop. Hold on. Think for a second about what you're involving yourself in. We're talking about
the superintelligent transition here. The closest analogy would be, not the rise of the human
species, but the rise of life on Earth.

We're talking about the end of the era of evolution and the beginning of the era of recursive self-improvement. This is an event that happens only once in the history of an entire... "species" isn't the right term, nor "civilization"; whatever you call the entire sequence of events arising from Earth-originating intelligent life. We are not talking about a minor event like the Apollo moon landing or the invention of the printing press.

This is not something you put on your resume. This is not something you do for kicks. This is not
something you do because it sounds cool. This is not something you do because you want your
name in the history books. This is not, even, something that you do because it strikes you as
beautiful and terrifying and the most important thing in the world, and you want to be a part of it,
you want to have been there. These are understandable feelings, but they are, in the end, selfish.
This is something you should only try to do if you feel that it is the best thing you can do, out of
all the things you might do; and that you are the best one to do it, out of all the people who might apply for a limited number of positions. The best. Your personal feelings are not a consideration.
Is this the best thing for Earth?

You may not get paid very well. If there are any dangerous things that happen to seed AI
programming teams, you will be directly exposed to them. At any time the AI project might fail
or run out of funding and you will be left with nothing for your work and sacrifice. If the AI
project moves to Outer Mongolia you will be expected to move there too. Your commitment is
permanent; once you become an expert on a piece of the AI, on a thing that doesn't exist
anywhere else on Earth, there will be no one who can step into your shoes without substantial
delay. Ask yourself whether you would still want to be a seed AI programmer if you knew, for
certain, that you would die as a result - the project would still have a reasonable (but not certain)
chance of success; but, win or lose, you would die before seeing the other side of the Singularity.

Do you, having fully felt the weight of those thoughts, but balancing them against the importance
of the goal, want to be an AI programmer anyway? If so, you still aren't thinking clearly. The
previous paragraphs are not actually relevant to the decision, because they describe personal
considerations which, whatever their relative emotional weight, are not major [existential risks].
You should be balancing the risks to Earth, not balancing the weight of your emotions - you
cannot count on your brain to do your math homework. If you reacted emotionally rather than
strategically, you may not have the right frame of mind to act calmly and professionally through
a hard takeoff. That too is a job requirement. I have to ask myself: "Will this person panic when
s/he realizes that it's all really happening and it's not just a fancy story?"

Certain people are reading this and thinking: "Oh, but AI is so hard; everyone fails at AI; you
should prove that you can build AI before you're allowed to talk about people panicking when it
all starts coming true." They probably aren't considering becoming AI programmers. But for the
record: It doesn't matter how much macho hacker testosterone has become attached to the
problem of AI, it makes no sense to even try to build a seed AI unless you expect that final
success would be a good thing, meaning that it is your responsibility to care about the complete
trajectory from start to finish, including any problem that might pop up along the way. It doesn't
matter how far off the possibility is; the integrity of the entire future project trajectory must be
considered and preserved at all points. If someone would panic during a hard takeoff, you can't
hire them; you're setting yourself up to eventually fail even if you do everything right.

The tasks that need to be carried out to create a seed AI can be divided into three types:

First, there are tasks that can be easily modularized away from deep AI issues; any decent True
Hacker should be able to understand what is needed and do it. Depending on how many such
tasks there are, there may be a limited number of slots for nongeniuses. Expect the competition
for these slots to be very tight.

Second are tasks that require a deep understanding of the AI theory in order to comprehend the
problem.

There's a tradeoff between the depth of AI theory, the amount of time it takes to implement the project, the number of people required, and how smart those people need to be. The AI theory we're planning to use - not [LOGI], but LOGI's successor - will save time and means that the project may be able to get by with fewer people. But those few people will have to be brilliant.

What kind of people might be capable of learning the AI theory and applying it?

Aside from anything else, they need to be very smart people with plenty of raw computational
horsepower. That intelligence is a prerequisite for everything else, because it is what powers any
more complex abilities or skills.

Within that requirement, what's needed are people who are very quick on the uptake; people who
can successfully complete entire patterns given only a few hints. The key word here is
"successfully" - thanks to the way the human brain is wired,everyone tries to complete entire
patterns based on only a few hints. Most people get things wildly wrong, don't realize they have
it wildly wrong, and resist correction.

But there are a few people I know, just a few, who will get a few hints, complete the pattern, and
get it all right on the first try. People who are always jumping ahead of the explanation and
turning out to have actually gotten it right. People who, on the very rare occasions they get
something wrong, require only a hint to snap back into place. (My guess is that some people, in
completing the pattern, "force link" pieces that don't belong together, overriding any disharmony,
except perhaps for a lingering uneasy feeling, soon lost in the excitement of argument. My guess
is that the rare people with a talent for leaping to correct conclusions don't try to build patterns
where there are any subtle doubts, or that they more rapidly draw the implications that would
prevent the pattern from being built.)

Take, for example, people hearing about evolutionary psychology for the first time. Most people
I've seen immediately rush off and begin making common mistakes, such as postulating group
selection, seeing evolution as a benevolent overarching improvement force, confusing
evolutionary causation with cognitive motivation, and so on. When this happens I have to lecture
on the need to become acquainted with the massive body of evidence and accumulated inference
built up about which kinds of evolutionary reasoning work, and which don't - the need to
hammer one's intuitions into shape. When I give this lecture I feel like a hypocrite, because I,
personally, needed only a hint for the picture to snap into place. I've seen a few other people
do this as well - immediately get it right the first time. They rush to conclusions just as fast, but
they rush to correct conclusions. They don't need to have the massive body of accumulated
evidence hammered into them before they let go of their precious mistakes; they just never
postulate group selection or evolutionary benevolence to begin with. No doubt many people
reading this will have leapt to a right conclusion on at least one occasion and are now selectively
remembering that occasion, but on the other hand the very rare people who genuinely do have
very high firepower in this facet of intelligence should recognize themselves in this description.
