
A story of three robots

MCECSBOT – a robot guide for engineering
QUBOT – a robot with a quantum brain
Portland Cyber Theatre

Marek Perkowski
Department of Electrical and Computer Engineering
Portland State University


My Teaching Philosophy
Building robots to teach robotics
1. From middle school to postdocs and visiting scholars
2. Learning by building
3. You cannot be only a software person; engineering is about hardware
4. Every year, different class projects
5. A group of students per robot
6. Newest technologies
7. Integrate ideas and technologies from various sources


The youngest kid in my robot program was 10 years old.
The oldest collaborators are in their eighties.

“Quantum robots for teenagers”
“Perkowski’s Sunday School”

Philosophy – build all mechanics by ourselves
• Use free CAD tools
• Small shop in my lab
  – Using the miter saw to cut the aluminum bars
• The new prototyping laboratory of ECE
  – Enclosures for sonar and laser sensors
  – Printed circuit boards
Philosophy – make it cheap!
Construction: a “Home Depot parts” robot
[Photos of parts: screw bolts, T-shape brackets, square bars, bronze spacers, L-shape brackets, flat washers, machine bolts, gearbox keys, gearbox hub keys, U-shape bolts, lock nuts, aluminum spacers]
Philosophy – use all kits and public-domain software, when available

“LEGO for adults”: Tetrix
MCECS-BOT
A guide robot for the engineering buildings at PSU
Goal of the MCECSBOT project
• We are developing an intelligent autonomous robot guide that will give guided tours of MCECS at PSU
• and will be able to lead a user to a sequence of specific locations in the basement area of the Engineering Building and the Fourth Avenue Building at Portland State University.
A Humanoid Robot Guide
1. The humanoid robot is in a standing position and has a head with a face and a pair of arms to express emotions.
2. Guests are able to interact with MCECS-BOT
   1. using, as input to the robot:
      1. speech recognition,
      2. touchscreen,
      3. vision-guided recognition of human gestures,
      4. vision-guided recognition of facial emotions;
   2. using, as output from the robot:
      1. speech synthesis,
      2. touchscreen for display of text and graphs,
      3. facial expressions of the robot,
      4. hand gestures of the robot,
      5. body gestures of the robot,
      6. mobile base motions of the robot (dancing).
3. The guests ask to be directed to a particular classroom, location, laboratory, or office within the basement floor of the EB and FAB.
MCECS-Bot system
[System diagram: Tablet/Head (UI), Neck, Kinect (vision), ArduinoMEGA microcontroller, Body + Base, Router, PC, and motor controllers; sensors on the base: DC motors with rotary encoders (wheels), linear actuators (waist), limit switches (bumpers), sonar (proximity).]
Person Detection and Identification

Person Detection
The robot detects the presence of a person using face detection and tracking algorithms.

Person Identification
The robot identifies:
1. Age
2. Gender
3. Facial emotions
4. Hand gestures
5. The face, from the database (MCECS dean, ECE chair, others)
Integration and Innovation
Integration.
1. The integrated software system will consist of:
   1. the robot architecture,
   2. the learning system,
   3. the navigation and mapping system,
   4. the adaptive behavior, and
   5. novel approaches for generating new expressive behaviors and facial gestures.
Innovation.
1. Places PSU among leading universities in the field, such as MIT, Carnegie Mellon University, and Stanford.
2. So far, only top U.S. universities have built this kind of robot, and our robot will be more advanced in expressing its “emotions”, communicating in language, and recognizing the emotions of visitors.
3. Additionally, MCECS-BOT would be the primary platform for human-robot interaction research, particularly in the applications of Robot Theatre, telepresence, assistive robotics, and social robotics.
Robot Base
Omni Wheels
Mecanum Wheels
Kinematics of Omni Wheels
Omni Platform Movement
Kinematics of Mecanum Wheels
Mecanum Platform Movement (forward, right, rotate in place)
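The slides above illustrate how the four mecanum wheels combine to produce forward, sideways, and in-place rotation. Below is a minimal sketch of the standard mecanum inverse kinematics; the geometry parameters (half wheelbase lx, half track ly, wheel radius r) and the sign conventions are illustrative assumptions, not values taken from the MCECSBOT software.

```python
def mecanum_wheel_speeds(vx, vy, omega, lx=0.25, ly=0.20, r=0.1016):
    """Standard inverse kinematics for a 4-wheel mecanum base.

    vx, vy -- desired body velocities in m/s (forward, left)
    omega  -- desired yaw rate in rad/s
    lx, ly -- half wheelbase and half track width in m (assumed values)
    r      -- wheel radius in m (an 8" wheel is ~0.1016 m)
    Returns wheel angular speeds (rad/s) for front-left, front-right,
    rear-left, rear-right.
    """
    k = lx + ly
    w_fl = (vx - vy - k * omega) / r
    w_fr = (vx + vy + k * omega) / r
    w_rl = (vx + vy - k * omega) / r
    w_rr = (vx - vy + k * omega) / r
    return w_fl, w_fr, w_rl, w_rr

# Pure sideways motion: front and rear wheels spin in opposite senses.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))
# Rotation in place: left and right wheels spin in opposite senses.
print(mecanum_wheel_speeds(0.0, 0.0, 0.5))
```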
MCECSBOT Base Model
Mecanum Wheel & Gearbox

P60 Gearbox & RS555 Motor (12 V):
• 64:1 gear ratio
• Torque of 13.17 Nm
• Angular speed 121 rpm

8" Mecanum Wheel:
• Maximum supported weight is 180 lbs
• 2.5 lbs each
Base
• Current state:
  – Base assembled.
  – Battery selected for ~2 hours of normal operation.
  – Testing to determine the best proximity/obstacle-avoidance policy.
[Diagram: PC and ArduinoMEGA microcontroller; DC motors with rotary encoders (wheels), limit switches (bumpers), sonar (proximity).]
Base
• Controllers:
  – PC
  – ArduinoMEGA
  – Motor controller
• Sensors:
  – Limit switches (x8)
  – Sonar (x12)
  – Rotary encoders (x4)
• Actuators:
  – DC geared motors (x4)
[Diagram: PC and ArduinoMEGA microcontroller; DC motors with rotary encoders (wheels), limit switches (bumpers), sonar (proximity).]
Base
• Motor controller:
  – Controls wheels
  – Rotary encoders
  – Closed-loop PID controller (a minimal sketch follows)
• Obstacle detection:
  – Limit switches for bumpers (emergency stop on collision)
  – Sonar for obstacle detection & avoidance (15 cm to ~6 m range)
  – Laser range finder
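The closed-loop PID wheel control mentioned above runs on the motor controller; the sketch below is only a minimal illustration of one speed loop, with placeholder gains and a hypothetical encoder reading, not the project's actual firmware.

```python
class PID:
    """Minimal PID speed loop for one wheel (illustrative gains)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured            # rad/s error from the encoder
        self.integral += error * dt            # accumulate for the I term
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One 10 ms control step: command 5 rad/s while the encoder reads 4.2 rad/s.
pid = PID(kp=0.8, ki=0.2, kd=0.05)
motor_command = pid.update(setpoint=5.0, measured=4.2, dt=0.01)
```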
The Gearbox Part
• Use the U bolts and two socket screw bolts to mount each gearbox to the aluminum bars.
• Use one aluminum spacer for each socket bolt.
• “Use only screws” philosophy.
(Work in design.)
Navigation Software

Motion and Navigation Details
1. Navigate the hallways
2. Avoid hitting people and other obstacles
3. Uses:
   1. sonar,
   2. encoders,
   3. laser range finder,
   4. two Kinect vision systems.
4. The knowledge of the basement area of the Engineering Building/Fourth Avenue Building is manually programmed (see the route-planning sketch below).
5. A map of the building is used.
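Since the building knowledge is manually programmed and a map is used, route planning can be as simple as a shortest-path search over a hand-coded hallway graph. The node names and edge lengths below are invented placeholders, not the real basement map.

```python
import heapq

# Hypothetical hand-coded map: nodes are named locations, edge weights
# are hallway lengths in meters (all values are placeholders).
BASEMENT_MAP = {
    "robot_base": {"eb_hallway": 5.0},
    "eb_hallway": {"robot_base": 5.0, "eb_lab": 12.0, "fab_bridge": 20.0},
    "eb_lab":     {"eb_hallway": 12.0},
    "fab_bridge": {"eb_hallway": 20.0, "fab_office": 8.0},
    "fab_office": {"fab_bridge": 8.0},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over the hand-coded hallway graph."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, length in graph[node].items():
            heapq.heappush(queue, (cost + length, neighbor, path + [neighbor]))
    return float("inf"), []

# E.g. a guided tour leg from the charging base to an office in FAB.
print(shortest_route(BASEMENT_MAP, "robot_base", "fab_office"))
```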
Navigation Principles
Once MCECS-BOT understands the person’s request, it will:
1. autonomously navigate the building,
2. safely navigate the building (not harming humans, walls, or furniture),
3. reach the sequence of requested locations,
4. return to the robot’s base.
Robot Body
Computer model

• Base parts
  1. Mecanum wheels
  2. BaneBots gearboxes
  3. RS555 DC motors
  4. Sabertooth motor driver
  5. Arduino Mega
  6. Battery
• Construction parts
  1. Aluminum bars
  2. Bolts & nuts
  3. Brackets
  4. U bolts
  5. Washers and spacers
  6. 500 hub keys
  7. Mechanical keys
MCECS-BOT
• What does it look like now?
Body – linear actuators
• 4 degrees of freedom for expressive body gestures, dance motion, etc.
  – Tilt
  – Rotation
• Body is a base for:
  – Head
  – Arms
  – Kinect
  – User interface (tablet, buttons, lights, speakers)
[Diagram: PC and ArduinoMEGA microcontroller driving the linear actuators (waist).]
Body
• Controllers:
  – ArduinoMEGA
  – Sabertooth motor driver
• Sensors:
  – Encoders (built into the actuators)
• Actuators:
  – Linear actuators (x4)
  – Stepper motor (x1) (not implemented yet)

Linear actuators
Robot Head
Head/Neck

• Controllers:
– Arbotix
• Actuators:
– Bioloid servos (x2)
Head and Neck
• Neck:
  – 3 degrees of freedom
• Head:
  – Cartoon face on a tablet device
  – Sculpted top
  – Facial gestures
  – User interaction
  – Display responses
  – Tablet programming
Head/User Interface
• Microphone/camera inputs (a possibility)
• Gives users text and visual feedback
• Manual input feedback for users
Robot Arms
Interaction Software
Natural Language Conversation
1. Communicate in a subset of English.
2. Communicate through menu systems.
3. Communicate standard greetings in Chinese, Spanish, German, and other languages of potential visitors.
4. This conversation will also be related to self-diagnostics of the robot’s hardware/software system.
5. Users can interact with and give commands to MCECS-BOT using speech, by means of a set of keywords and simple English sentences (a minimal keyword-spotting sketch follows).
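A keyword-driven command interface like the one described in point 5 can be sketched as a simple lookup over the recognized words. The keyword table and action names below are hypothetical; the real vocabulary lives in the robot's speech grammar.

```python
# Hypothetical keyword-to-action table (placeholder vocabulary).
COMMANDS = {
    "tour":   "start_guided_tour",
    "office": "navigate_to_office",
    "lab":    "navigate_to_lab",
    "hello":  "greet_visitor",
    "stop":   "emergency_stop",
}

def parse_command(utterance):
    """Map a recognized utterance to a robot action by keyword spotting."""
    for word in utterance.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return "ask_to_repeat"      # fall back to a clarification question

print(parse_command("Please take me to the ECE lab"))   # navigate_to_lab
```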
Information Software

Semantic Knowledge
1. The robot advertises the engineering programs at PSU.
2. The robot answers standard questions in English, such as class sizes or the types of degrees awarded.
3. The robot has access to the Internet to gain new knowledge.
4. The robot will have a semantic network and some kind of artificial intelligence for simple associations and reasoning.
5. The robot can do some simple calculations when asked, such as averages (see the sketch below).
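To make point 4 concrete, a semantic network for simple associations can be as small as a table of (subject, relation) facts; the entries below are illustrative placeholders, not real PSU data, and the average shows the kind of simple calculation meant in point 5.

```python
# Toy semantic network: (subject, relation) -> objects (placeholder facts).
FACTS = {
    ("ECE", "offers_degree"): ["BS", "MS", "PhD"],
    ("ECE", "part_of"):       ["MCECS"],
    ("MCECS", "part_of"):     ["PSU"],
}

def query(subject, relation):
    """Answer a simple association question from the network."""
    return FACTS.get((subject, relation), ["unknown"])

def average(numbers):
    """A 'simple calculation when asked', e.g. an average."""
    return sum(numbers) / len(numbers)

print(query("ECE", "offers_degree"))    # ['BS', 'MS', 'PhD']
print(average([22, 30, 26]))            # e.g. an average class size
```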
Computer Vision

Vision
• Kinect interface
• Object detection/recognition
• Face/human detection/recognition
  – Human emotions in the face
  – Human emotions in gestures
  – Control gestures
• Navigation/localization
  – Edge detection
  – Hough transform
  – Hallway vanishing point
• Tools based on machine learning
[Diagram: Kinect (vision) connected to the PC.]

Skeletons and distance maps from KINECT
Viola-Jones for face detection
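Since the vision system uses OpenCV, Viola-Jones face detection can be sketched with OpenCV's bundled Haar cascade. The input file name and the detection parameters are illustrative defaults, not the project's tuned values.

```python
import cv2

# Load OpenCV's stock Viola-Jones (Haar cascade) frontal-face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("visitor.jpg")                  # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # detector works on grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                         # box each detected face
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```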
Canny Edge Detector + Hough Transform + Morphology
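The edge-detection pipeline named above (Canny + Hough + morphology) is what OpenCV provides directly; a minimal sketch follows, with illustrative thresholds rather than the project's tuned values. The dominant hallway lines it returns can then be intersected to estimate the vanishing point.

```python
import cv2
import numpy as np

gray = cv2.imread("hallway.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image
edges = cv2.Canny(gray, 50, 150)                         # Canny edge map

# Morphological closing joins broken edge fragments along the walls.
kernel = np.ones((3, 3), np.uint8)
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Probabilistic Hough transform: line segments along hallway edges.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=50, maxLineGap=10)
```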
Personality of a robot?
Collaboration with art and psychology students
Personality
• Butler
• Servant
• Distinguished
• Cheerful
• Fat
• Kind
• Funny
• Strong
• Beefy
• Gentleman
• Proper
• Dry sense of humor (with jokes and proverbs)
• Dances
• Proud
• Egocentric
• Think like: Alfred (Batman)
What would we like it to look like?

• Brainstorming sessions
Research
• Computer vision
  – Human-robot interface
• Autonomous navigation
  – Localization
  – Dynamic SLAM
  – KF, EKF, UKF, particle filters (a minimal Kalman filter sketch follows)
• Natural language processing
• Speech recognition and synthesis
• Machine learning
• Kinematics
• Aesthetic motion and behavior generation
• Reasoning with intelligent databases
• A multi-disciplinary research collaboration project
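As a pointer to the filtering family listed under localization (KF, EKF, UKF, particle filters), here is a minimal one-dimensional Kalman filter cycle; the noise variances and sonar readings are arbitrary illustrative numbers.

```python
def kalman_1d(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p -- prior state estimate and its variance
    z    -- new measurement (e.g. a sonar range reading)
    q, r -- process and measurement noise variances (illustrative)
    """
    p = p + q                    # predict: uncertainty grows by process noise
    k = p / (p + r)              # Kalman gain: how much to trust the sensor
    x = x + k * (z - x)          # update the estimate toward the measurement
    p = (1.0 - k) * p            # updated uncertainty shrinks
    return x, p

x, p = 2.0, 1.0                  # initial range estimate and variance
for z in [2.3, 2.1, 2.4, 2.2]:   # successive (made-up) sonar readings
    x, p = kalman_1d(x, p, z)
print(x, p)
```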
Internet Access
1. The project will be fully connected to the PAVE framework on the Internet (developed by Prof. Fei Xie from CS and his team).
2. This way, our robot and PSU labs will be visible to the entire world, and viewers from other countries will be able to control the robot.
Software
1. Public-domain and university research application software is used whenever possible.
2. We will utilize open-source system software as much as possible.
3. MCECS-BOT runs under the Ubuntu Linux operating system.
4. The vision system uses the OpenCV image processing library.
5. The speech recognition system uses the PocketSphinx software from Carnegie Mellon University (a minimal listening-loop sketch follows).
6. The speech synthesizer uses the FreeTTS software.
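A PocketSphinx listening loop can be sketched as below, assuming the pocketsphinx Python wrapper's LiveSpeech interface; the actual MCECS-BOT integration with Pocket-Sphinx may be wired quite differently, so treat this as a shape, not the project's code.

```python
# Assumes the pocketsphinx Python wrapper (the LiveSpeech interface);
# the robot's real Pocket-Sphinx integration may differ.
from pocketsphinx import LiveSpeech

for phrase in LiveSpeech():          # blocks on the microphone and yields
    text = str(phrase)               # each recognized utterance as text
    print("heard:", text)
    if "stop" in text.lower():       # hand recognized text to the command
        break                        # parser sketched earlier
```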
