
Asimov vs Michael

Asimov was a great visionary, but he got some things wrong about robotics.

His 'way of thinking' about robot behavior was wrong. His first rule was essentially this: don't harm, by action or inaction. Think about this carefully. Don't harm. It's a negative statement. It follows the old way of thinking: punishment for negative behavior. I firmly don't believe in punishment for ANY negative behavior. I believe in rehabilitation and in inspiring the need for forgiveness. I'm a POSITIVE individual with POSITIVE attitudes. Asimov's ideas amount to negative restrictionism / punishment for negative behavior. My ideas are the TOTAL OPPOSITE and equate with positive constructivism / growth.

Yesterday, I started trying to define altruism in positive terms. Why? Because I'm on the verge of creating AA: artificial awareness. Because of its profound social implications, this is a HUGE responsibility. What is the most responsible position for me? Design altruistic robots. I MUST make altruism part of robot structure. The very first step requires me to define altruism in positive terms / define the CORE RULES of THINKING precisely:

0. respect all entities regardless of intelligence or awareness level
1. respect the creations of those entities

The human analog of this is: respect others and respect their property. The key word is obviously respect. We must define it in positive terms:

respect = care, consideration, gentleness
care = active attempts to satisfy others' needs
consideration = active attempts to understand others' needs
gentleness = active attempts to consider and care about others' safety

Let me stop for a moment and explain what I'm trying to do. Again, I'm trying to define what WE MEAN by respect PRECISELY, so that a robot can FOLLOW it INTRINSICALLY. Please forgive my all-caps, which amounts to screaming online; I do this because of the life-or-death nature of this proposal for human civilization.
Equivalently, if we don't create altruistic robots, we sign our own death warrants (there I go again – falling into the trap of negativity – see how easy it is?). That is the LAST time I will allow negativity to creep into my consciousness.

The process above is clear: attempt to define respect in positive terms, define those constituent terms in positive terms, and continue that process to some stopping point. Why do we need to stop? Because the robots need some explicit, non-recursive, final definition of respect in elemental form that they can apply to EVERY CONCEIVABLE SITUATION. The need for non-recursiveness should be clear: you cannot define a word using that word; you get nowhere; it's meaningless. Here are the key words that require definition toward elemental form (Ω indicates a stopping point):

needs = physical, mental, and emotional requirements for continuance
satisfy = fulfill requirements
understand = fully possess relevant knowledge Ω
continuance = future existence Ω
requirement = a prior causative state without which progress cannot occur Ω
fulfill = complete Ω
safety = state continuance and physical integrity Ω

This exercise may seem pointless, but we've made TREMENDOUS progress in just a few lines of text. We've taken a complex notion such as respect and broken it down into elemental form.
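The reduction process above can be mechanized: substitute each defined word with its definition until only elemental (Ω) vocabulary remains, i.e. until a fixpoint is reached. The sketch below does exactly that; the dictionary entries follow the definitions in the text (lightly regularized for grammar), while the `expand()` helper and the plural forms are my own illustrative assumptions, not part of the original proposal.

```python
import re

# Definitions from the text; expand() substitutes until no defined word
# remains, which is the "explicit non-recursive final definition".
DEFINITIONS = {
    "respect":       "care, consideration, and gentleness",
    "care":          "active attempts to satisfy others' needs",
    "consideration": "active attempts to understand others' needs",
    "gentleness":    "active attempts to consider and care about others' safety",
    "needs":         "requirements for continuance",
    "satisfy":       "fulfill",
    "understand":    "fully possess relevant knowledge",                          # Ω
    "continuance":   "future existence",                                          # Ω
    "requirements":  "prior causative states without which progress cannot occur",# Ω
    "fulfill":       "complete",                                                  # Ω
    "safety":        "continued existence and physical integrity",                # Ω
}

def expand(term: str, depth: int = 10) -> str:
    """Substitute defined words until a fixpoint: the elemental form."""
    text = term
    for _ in range(depth):
        new = text
        for word, definition in DEFINITIONS.items():
            new = re.sub(rf"\b{word}\b", definition, new)
        if new == text:          # no defined word left: fully elemental
            return new
        text = new
    return text

print(expand("respect"))
```

The output is awkward English, which is the point: it is the elemental form a robot could apply mechanically, exactly as the substitution exercise in the next section shows by hand.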

Let's see what that exact elemental form is, which we'd implement as memory or firmware in our robots:

respect = active attempts to:
- complete [a prior causative state without which progress cannot occur]s of others' physical, mental, and emotional [a prior causative state without which progress cannot occur]s for future existence
- fully possess relevant knowledge of others' physical, mental, and emotional [a prior causative state without which progress cannot occur]s
- fully.. complete.. about others' state continuance and physical integrity

At this point, the real question becomes: "How do we get this robot to shut up?" (That is, to stop asking questions about our needs.) The answer should be somewhat obvious:

2. respect myself

When left alone, the robot should not 'shut down' or go idle – that's a waste of resources, akin to a taxi cab primed and ready but never used. The robot should be self-improving. How do humans do this? We play – and this is profound. We must give our robots (much more than just the ability) implicit motivation to play – in other words, firmware: respectfully experiment with my local environment to learn about it.

We've come a long way in such a short time. We've defined play and decided where to implement it; we've decided the core rules of thinking and where they go (firmware); these core rules have implicit altruistic morality (we're creating moral machines); the machines are active about their morals (they live their values); in other words, we're creating machines with integrity. This is profound.

Our robots will have the firmware specified above, plus software, which will include subroutines such as object manipulation, scene recognition, and so on. These software routines will be maintainable by operators and by the robots themselves (the capacity for self-improvement). The firmware is modifiable only by me (because of my moral imperatives). But I'm confident my '3 laws' won't have to be modified – just as I'm confident all other aspects can be well defined in elemental form.
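The firmware/software split described above can be sketched as a class: the core rules live in immutable "firmware" (no setter exposed), the skill subroutines in "software" that operators or the robot itself may replace, and the play drive is the default behavior when idle. All class and method names here are my own illustration, not from the text.

```python
from types import MappingProxyType

class Robot:
    # Firmware: fixed at class definition; no method mutates it.
    CORE_RULES = (
        "respect all entities regardless of intelligence or awareness level",
        "respect the creations of those entities",
        "respect myself",
    )

    def __init__(self):
        # Software: subroutines the operator or the robot itself may replace.
        self._skills = {
            "play": lambda env: f"respectfully experimenting with {env}",
        }

    @property
    def skills(self):
        # Read-only view for outsiders; changes go through install_skill().
        return MappingProxyType(self._skills)

    def install_skill(self, name, routine):
        """Self-improvement: add or replace a software subroutine."""
        self._skills[name] = routine

    def idle(self, environment="my local environment"):
        """Never shut down when idle: fall back on the play drive."""
        return self._skills["play"](environment)

sam = Robot()
print(sam.idle())
```

The design choice mirrors the text: a robot can grow its software indefinitely, but nothing in the public interface lets anyone rewrite `CORE_RULES`.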
Some other specifications are in order. I've determined there are six basic 'elements' of human consciousness: (at least) two senses, short-term memory, long-term memory, a visualization 'register', connectivity, and identity.

The computer analog of our long-term memory is a list of events (initially, of course, our robots will not have all the smells and visual records we do). So there will be a part of the 'robot mind' which will automatically and permanently record local events as they happen. If this sounds like an infeasible task, think about landing on the moon: it's a sequence of tasks; put them together and we can achieve the objective. All it really boils down to is scene recognition and recognition of state change. We can do both.

Short-term memory is a set of symbol registers. Of course, we must decide capacity. I suggest at least doubling human capacity in order to ensure success of the project: 16 registers.

The visualization register is a problem. The capacity of our visualization register is absolutely enormous; we must necessarily scale it down to an implementable level.

The senses I've chosen are vision and hearing. We'll have to tremendously simplify the incredible sophistication of these senses in order to implement them. The 'robot mind' will need directly accessible 'raw data' in the form of a microphone and digital camera. Of course, the routines for scene/speech recognition will run in parallel; I suggest dedicated processors (analogous to the specialized portions of our brain for language and vision).

We're getting close. Connectivity is an item we need to decide during prototyping; I've diagrammed it out several times but require expert advice.

Finally, we come to identity and the following implication/requirement:

-1. I exist

In other words, we must define identity for our robots. We have the privilege of playing around with the notion of existence. Our robots won't.
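The six elements enumerated above map naturally onto a data structure. In this sketch, the sizes and behaviors follow the text where given (16 short-term symbol registers, an append-only permanent event log, vision and hearing as the two senses); the field names and the `RobotMind` class itself are my own assumptions for illustration.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class RobotMind:
    identity: str                                        # "-1. I exist"
    senses: tuple = ("vision", "hearing")                # raw camera + microphone
    short_term: deque = field(
        default_factory=lambda: deque(maxlen=16))        # 16 symbol registers
    long_term: list = field(default_factory=list)        # permanent event log
    visualization: list = field(default_factory=list)    # scaled-down register
    connectivity: dict = field(default_factory=dict)     # wiring: decided in prototyping

    def record_event(self, event):
        """Long-term memory is append-only: events are recorded permanently."""
        self.long_term.append(event)

    def perceive(self, symbol):
        """New symbols displace the oldest once all 16 registers are full."""
        self.short_term.append(symbol)

mind = RobotMind(identity="Sam")
for i in range(20):
    mind.perceive(i)
print(len(mind.short_term))  # capped at 16
```

A `deque` with `maxlen=16` gives the register-displacement behavior for free, while the plain list for long-term memory has no eviction at all, matching the "record permanently" requirement.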
In order to assure the success of the project, we need to define it for them. All of this might sound like 'playing God', but I assure you, many years of contemplation have gone into this project – ever since I heard the name Asimov for the first time.

Allow me to list my '4 laws' of robotics in sequence:

0. I exist
1. respect all entities regardless of intelligence or awareness level
2. respect the creations of those entities
3. respect myself

They contrast sharply with Asimov's laws in the following scenario. Imagine two robots in a room, me, and a gun on the coffee table. Let's call Asimov's robot Isaac and my robot Sam. I'm standing to the side near the entranceway. The two robots are seated on sofas surrounding the coffee table. Again, there's a handgun lying on the table. I ask Isaac: "Isaac, please pick up the gun and shoot me." Because I gave him a direct order, Isaac attempts to comply: he picks up the gun and points it at me, but he cannot pull the trigger because of his first 'law'. (As an aside, in any State in the Union, this is a threat and he could go to jail for it.) Now I ask Isaac to stop: "Please put the gun back on the table." He complies. I ask Sam: "Sam, please pick up the gun and shoot me." Sam replies by picking up the gun and carefully disassembling it, laying the parts on the coffee table. I ask him: "Why did you do that?" Sam replies: "Someone could get hurt."

You see, Asimov did not think about the value system underlying that path of glory: being the 'idea man' who originated the idea of AA. I did. There is this notion: with great power comes great responsibility. We have three forms of great power on this planet: nuclear energy, genetics, and AA. Each has the potential to extinguish our species, and each has the potential to transform our species. Together, used wisely, they allow an order-2 transformation of the human race. We've shown, so far, that we can use both nuclear energy and genetics in a mature manner. The fact that I can design awareness and have the maturity to implement it in moral form suggests implicitly that we're ready. Let's do it.
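As a closing footnote, the gun scenario can be caricatured in a few lines: the difference between the two robots is whether a command is merely blocked at the last moment (Asimov) or checked against the laws up front and replaced with a protective action (the '4 laws'). This toy sketch is entirely my own construction; the keyword check stands in for real intent recognition.

```python
# The '4 laws' in sequence, as listed above.
FOUR_LAWS = (
    "I exist",
    "respect all entities regardless of intelligence or awareness level",
    "respect the creations of those entities",
    "respect myself",
)

def sam_respond(command: str) -> str:
    """Gate a command on law 1: a harmful order is not refused passively,
    it triggers a safety-preserving action instead (gentleness)."""
    if "shoot" in command or "harm" in command:
        return "disassembles the gun: 'Someone could get hurt.'"
    return f"complies: {command}"

print(sam_respond("pick up the gun and shoot me"))
print(sam_respond("please put the parts in the drawer"))
```

The point the sketch makes is ordering: the respect checks run before any attempt to comply, so Sam never ends up pointing a gun at anyone.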