DigitalCommons@UMaine
Electronic Theses and Dissertations Fogler Library
Fall 12-18-2015
Recommended Citation
Ebrahimi, Arezoo, "Design, Manufacturing and Control of an Advanced High-Precision Robotic System for Microsurgery" (2015).
Electronic Theses and Dissertations. 2378.
http://digitalcommons.library.umaine.edu/etd/2378
This Open-Access Thesis is brought to you for free and open access by DigitalCommons@UMaine. It has been accepted for inclusion in Electronic
Theses and Dissertations by an authorized administrator of DigitalCommons@UMaine.
DESIGN, MANUFACTURING AND CONTROL OF AN ADVANCED
HIGH-PRECISION ROBOTIC SYSTEM
FOR MICROSURGERY
By
Arezoo Ebrahimi
A THESIS
Master of Science
December 2015
Advisory Committee:
On behalf of the Graduate Committee for Arezoo Ebrahimi I affirm that this
manuscript is the final and accepted thesis. Signatures of all committee members
are on file with the Graduate School at the University of Maine, 42 Stodder Hall,
Orono, Maine.
LIBRARY RIGHTS STATEMENT
advanced degree at the University of Maine, I agree that the library shall make it
freely available for inspection. I further agree that permission for "fair use" copying
understood that any copying or publication of this thesis for financial gain shall not
Signature:
Date:
DESIGN, MANUFACTURING AND CONTROL OF AN ADVANCED
HIGH-PRECISION ROBOTIC SYSTEM
FOR MICROSURGERY
By Arezoo Ebrahimi
workspace and hand motion, steady hand movements, manipulating delicate thin tissues, and holding the instrument in place for a long time. New developments in robotically-assisted surgery can greatly benefit this field and facilitate those complicated surgeries. Robotic eye surgery can save time, reduce surgical complications, and enable more delicate surgical procedures that cannot be done
In this thesis work, the requirements for ophthalmic surgeries were studied, and based on them a robotic system with 6 DOF was proposed and designed. This robotic system is capable of controlling the position and orientation of the surgical instrument with a theoretical accuracy of 10 μm. The design features a remote center of motion that defines the point of entry into the eye or the patient's body. The forward and inverse kinematics equations and the workspace analysis of the robot are also discussed and presented.
Six miniature DC motors with their PID controllers were installed on the robot arms to drive the 6-DOF system. The dynamic behavior of a DC motor was therefore studied and modeled, and the position and velocity transfer functions were derived and used to study the behavior of the system and to manually tune the PID controller.
controller modules, the Controller Area Network (CAN) bus, and the controller software are discussed as well. The graphical user interface called EPOS Studio and
The completion of this undertaking could not have been possible without the
Besides my advisor, I would like to thank the rest of my thesis committee: Prof.
Vincent Caccese and Dr. Xudong Zheng, for their encouragement and useful
comments.
Most of all, I would like to thank my family for their boundless love and endless
TABLE OF CONTENTS
1.4.2 EYE-RHAS (Eye Robot for Haptically Assisted Surgery)
3.4.2 Frequency Domain Analysis
4.1 Encoder
BIBLIOGRAPHY
APPENDIX A: MATLAB CODE FOR FORWARD KINEMATICS
LIST OF TABLES
Table 4-1: DIP switch setting for robotic system with 6 nodes.
Table 4-2: Data sheet to convert the motion of six axes to quad count (qc) unit
LIST OF FIGURES
Figure 1-1: Block diagram for standard master-slave robotic surgical system
Figure 1-3: Anatomy of the eye - anterior and posterior parts.
Figure 1-9: Creating the main incision (ORBIS Cyber-Sight 2006)
Figure 1-11: Hydro-dissection (ORBIS Cyber-Sight 2006) and (Robinson 2012)
Figure 1-15: Range of angles that an instrument enters the eye (α)
Figure 1-17: Eye-RHAS, eye robot for haptically assisted surgery (Meenink 2011)
Figure 1-19: Robotic slave system for vitreo-retinal surgery (Ueta, et al. 2009)
Figure 1-20: Robotic master-slave system (Ueta, et al. 2009)
Figure 2-4: Ring and linear motion, Axis-I & II.
Figure 2-7: Link parameters: length, twist, offset and angle (Craig 2004)
Figure 2-11: Double-rotor conic workspace; positive and negative sections.
Figure 3-2: Unit impulse δ(t) and response to a set of individual impulses.
Figure 3-7: The step and impulse responses of the position transfer function.
Figure 3-8: The step and impulse responses of the velocity transfer function.
Figure 3-9: The position and velocity responses of the system to the voltage frequency.
Figure 3-10: A simple closed loop system with proportional controller.
Figure 3-11: The proportional controller step response for Kp=1.
Figure 3-12: The proportional controller step response for Kp=5.
Figure 3-15: The block diagram of closed loop PID controller.
Figure 3-16: The PID controller step response for Kp=1 (left) and Kp=50 (right).
Figure 3-18: The PID controller step response for Ki=1.
Figure 3-19: The PID controller step response for Ki=100.
Figure 3-20: The PID controller step response for Kd=0 (left) and Kd=1 (right).
Figure 3-21: The PID controller step response for Kd=2
Figure 4-1: The microsurgery robotic system with its motors and controllers
Figure 4-4: Physical layout of the six motor controllers CAN bus.
Figure 4-7: Controller (III) in operating status with the Green LED "ON".
Figure 4-9: Motor/encoder connector (left) and DC brushed controllers (V) &
Figure 4-11: The EPOS Studio screen with the Workspace tab open.
Figure 4-13: Different operating modes under the "Tools tab".
Figure 4-14: Data entry for 45 degree motion of node 1 (around the ring) in
CHAPTER 1: INTRODUCTION
assembly, packaging, product inspection, testing, and many more. The advantages of employing robotic systems stem from their ability to produce more accurate, higher-quality work in a short amount of time, which significantly reduces the cost of production and makes them the best fit for many industrial tasks.
autonomous or semi-autonomous. There are also tasks that require such a high level of accuracy and precision that it cannot be achieved by human hands. This is one of the most promising attributes of robots, and based on this feature medical
patients and doctors. It requires versatility and a specific design for certain applications. Medical robots can assist in two different areas: health care and surgery.
Health care robots are designed to perform a variety of tasks to help patients, the elderly, or individuals, while surgical robots are designed to assist surgeons in performing very specific tasks with high accuracy and less trauma. The objective of these robots is to perform procedures more ergonomically and non-invasively, with more precision and less effort.
Surgery is one of the fields that benefits most from advances in robotics. Surgery is quickly moving towards less invasive and more precise procedures to reduce hospitalization time and expedite recovery and healing.
results, these are merely marginal and do not reflect the real needs for effective techniques (Gheshmi 2012). Therefore, the need for utilizing new techniques in performing surgeries points towards robotic surgery (Patel 2008). There are some surgical procedures that cannot be done by hand or are extremely difficult to perform, such as delicate eye surgeries or brain surgery in which tiny
surgeons instead use other, indirect methods for those kinds of surgeries, which cause more trauma and, in some cases, increase healing and hospitalization time and cost.
and Gheshmi 2014). There have been recent successes in endoscopic and
less time and with less pain. For example, in laparoscopy, large incisions have been replaced by a few small incisions (up to five, 5-12 mm in diameter), so there is no longer a need to cut the body open to reach the abdominal area. The robotic arms are equipped with end-effectors carrying grippers; haptic, optical, and force sensors; and other surgical tools, which perform their functions and
2012) (Lambaudie, et al. 2010) to emphasize that the robot is fully under the control of the surgeon. A master-slave system is the common layout used in robot-assisted surgery; it consists of a master console, a controller, and a slave robot. The slave system manipulates the surgical instruments and tools, while the surgeon performs the surgery by controlling the master console. Figure 1-1 shows a block diagram of this system.
Figure 1-1: Block diagram for standard master-slave robotic surgical system
(Mahpeykar 2013).
Tele-operation is one of the spectacular advantages of this system, enabling the
surgeon to perform the surgery from a remote location. One of the commercially
Surgical 2013). Performing surgery using this system significantly reduces the
diameter) that enter the human body via small incisions, often fitted with a trocar.
These instruments are manipulated outside the body, and their movements inside the body are inverted and scaled about the instrument pivoting point (at the insertion site).
Another benefit of a master-slave layout is scaling the surgeon's hand motion and filtering the tremor, which reduces the probability of unwanted collision between surgical tools and adjacent tissues in high-precision surgeries and
Eye surgery, also known as ocular surgery, is surgery performed on the eye or its
a fragile organ, and requires extreme care before, during, and after a surgical procedure. The anatomy of the human eye is shown in Figure 1-2. An expert eye surgeon is responsible for selecting the appropriate surgical procedure for the
motion. One of the eye surgeries performed on the inner side of the eyeball is refractive cataract surgery. The other type is external eye surgery, which is performed on the rectus muscles that control eye movement. Refractive surgery is often external, affecting only the cornea (LASIK, for example), or can be intraocular. Figure 1-3 shows the anterior and posterior sections of the eye. The majority of eye surgeries, almost 80% of cases, are performed on the anterior part of the eye.
When the eye’s naturally clear lens becomes opaque or clouded, it’s called a
cataract. Most cataracts are the result of the natural process of aging. Others may be
surgery, one of the most common operations performed in the U.S., clears up the
cloudiness. Figure 1-4 shows a cataractous eye lens compared to a normal clear lens.
Figure 1-5: Effect of cataract on vision
Cataracts can be treated using eye drops in the early stages of development. In cataract surgery, the cloudy lens is removed or cleaned out and replaced by a clear man-made lens called an artificial intraocular lens (IOL). With over 1.8 million surgeries a year in the US, cataract surgery is the most common eye surgery in the United States.
Glaucoma occurs due to damage to the optic nerve, usually resulting from high pressure in the eye caused by the aqueous humor. The cause of the high pressure is unbalanced production and drainage of aqueous fluid inside the eye globe. Unfortunately the damage is permanent; however, there are surgical treatments to control the condition and prevent further damage to the eye. These surgical treatments involve opening the blocked Schlemm's canal and trabecular mesh tissues to let the aqueous humor drain properly and reduce
A refractive problem is the inability of the eye's lens to focus parallel rays of light properly to produce a clear image on the retina. Any defect in the shape or curvature of the cornea, the length of the eyeball, or the age of the lens can cause refractive errors. This problem can be temporarily corrected using eyeglasses or contact lenses. To treat
surgical treatment the cornea is cut open to access the eye lens, and then an excimer
problems related to the retina such as retinal detachment, vitrectomy and diabetic
retinopathy.
The retina is responsible for generating the electrical impulses of an image and sending this information via the optic nerve to the brain. This image is created from focused light rays entering the eye through the cornea and the pupil. Even though light is needed to generate an image, it can also damage retinal structures when too much of it is
To repair a damaged retina, vitrectomy surgery is performed by making three
Extraocular muscle surgery is the third most common eye surgery in the United States (American Society of PeriAnesthesia Nurses 2012). Eye muscle surgery treats conditions related to the alignment of the eyes, such as strabismus. Eye muscles are
small incision is made in the thin white tissue (conjunctiva) overlying the muscle. The muscle is then separated from the eye and reattached in a new position.
includes more types such as oculoplastic surgery, corneal surgery, eye removal,
and so on. In the next section, cataract surgery is discussed in detail as it is one of
the target surgeries of the intended robotic system design. Design requirements are
mostly derived from the procedures in cataract surgery as they are applicable to
natural lens with an artificial intraocular lens (IOL). The most popular technique to
perform this surgery is phacoemulsification. In this method an ultrasonic needle is used to emulsify the lens and extract the particles through small incisions. There are also other methods that remove the entire natural lens in one or a few pieces, which require bigger incisions that take longer to heal (Mahpeykar 2013).
This injection dilates the pupil to provide better access to the eye lens.
The eyelids are held open using a speculum to prevent blinking.
3) Making incision:
Figure 1-7: Making incision (ORBIS Cyber-Sight 2006).
To stabilize and maintain the shape of the cornea and also pressurize
5) Main incision:
Figure 1-9.
Figure 1-9: Creating the main incision (ORBIS Cyber-Sight 2006)
6) Capsulorhexis
In this step a portion of the lens capsule is cut open to access the lens.
in Figure 1-10.
7) Hydrodissection
inserted through the small incision and injects a jet of BSS behind the
lens, Figure 1-11. This is verified by rotating the lens in place to make
Figure 1-11: Hydro-dissection (ORBIS Cyber-Sight 2006) and (Robinson 2012)
is inserted into the anterior chamber of the eye through the main
instrument while its particles are aspirated through the tip, Figure 1-12.
Now that the capsule is clean, a folded IOL, as shown in Figure 1-13, is
inserted into the lens capsule via the main incision and then the IOL is
Figure 1-13: IOL insertion.
10) Viscoelastic removal
After the implantation of IOL, the viscoelastic gel is removed from the
As the final step the surgeon seals the incisions by injecting some BSS
to hydrate the incisions so that they do not leak and the pressure of
The designed robotic system must be able to deliver the required tasks while
mentioned are as follows (Taylor, et al. 1991). The robot must be fully under control and the surgeon must be in charge at all times. The instrument tip should stay within a pre-specified positional envelope and never exert excessive force on the patient.
Also, the robot must either be allowed to fail to a safe state or it must continue to
considered:
that the degrees of freedom manipulating the orientation and position of the instrument tip need to be rigid and consistent. This also leads to minimally invasive surgery.
3) Motion of the surgical instrument must be very steady and accurate with
Eye surgery involves very thin organs. The thickness of the retinal membrane is only a few microns. The vessels attached to the retina are 50-150 μm in diameter. Therefore, the accuracy of the robot must be less than 50 μm; that is, if the calculated accuracy of the system is above 50 μm, the robot is
1.3.2 Required Performance and Degrees of Freedom
posterior surgical procedures. For anterior surgical procedures the position of the instrument must be tangent to the corneal chamber, while in posterior surgical tasks it points more towards the inside of the eye, as shown in Figure 1-14. Here the angle β is introduced as the angle between the tool axis and the optic axis of the eye and will
retinal manipulation and so on. This angle, here called α, is shown in Figure 1-15.
Figure 1-15: Range of angles that an instrument enters the eye (α)
Based on the procedure requirements, the range for the α angle is −45° to 315° and for the β angle 0° to 90°.
To define the degrees of freedom for this robotic system, different ophthalmic
For this device to be capable of performing different eye surgeries, the robot must have 6 active degrees of freedom to control the position and orientation of the surgical instrument. Two of these relocate the entry point to the eye; two are required to orient the tool about this entry point; one is required to slide the instrument in and out of the eye; and one is required to rotate the tool about its axis.
The required velocity is usually low for microsurgeries to keep the accuracy
cataract surgery during the capsulorhexis step, when the tip of the instrument moves 10
the instrument.
considered since it requires higher forces compared to other types. This surgery
advanced instrument equipped with a tri-axial force sensor (X, Y, Z) was used (Jagtap and Riviere 2004) to measure the required force for the three mentioned
and 20 mN. In the Z direction these were: 375 mN, 75 mN and 575 mN (RMS).
peeling, 490 mN for vessel puncture and 5870 mN for vessel dissection (Meenink 2011).
The da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA) is the most well-known platform in robotic surgery. A feasibility study of eye surgery with the da Vinci® system was done by Bourla, et al. (2008). In this study, modified robotic
removing intraocular foreign body, and anterior capsulorhexis with the da Vinci
Instruments are adapted for the da Vinci® by attaching them to a metal plate, Figure 1-16 (Pitcher, et al. 2012). The plates are grasped by standard robotic
This study showed that the robotic instruments give a full range of movement. The dexterity of the robotic arms is excellent, with steady instrument motion. However, controlling the robotic arms was not as intuitive as moving the wrist. A highly stable
camera realignment, which resulted in inferior visual feedback compared with an
2011). This is a two-arm robotic system with 4 degrees of freedom per arm, a fixed remote center of motion, and an additional degree of freedom to actuate the gripper motion of the instrument. The arms can be mounted on the patient's table. The master console, shown in Figure 1-18, consists of two joysticks controlled by the surgeon. The surgeon's movements are scaled down and tremor is filtered out by the controller. This system is not capable of relocating the entry point to the eye during
Figure 1-17: Eye-RHAS, eye robot for haptically assisted surgery (Meenink 2011)
surgery (Ueta, et al. 2009). This robot has five degrees of freedom, shown in Figure 1-19. The system uses a master console with generic seven degrees of freedom, featuring a 3D vision system, shown in Figure 1-20. The instrument axis has a motion range of ±50° rotation, −29 to 32 mm axial penetration, and 360° of rotation. The system features a fixed remote center of motion. The accuracy for this slave
Figure 1-19: Robotic slave system for vitreo-retinal surgery (Ueta, et al. 2009)
Figure 1-20: Robotic master-slave system (Ueta, et al. 2009)
To improve precision and amplify force feedback, the system provides scaled-down
The slave robot has two compact arms, 2.5 cm in diameter, with 6 DOF, shown in Figure 1-21. All joints are cable-driven and are connected to a cylindrical ø120 mm base at the top that contains the motors. The accuracy of the surgical instrument is 12
The master arms are very similar to the slave arms and have the same
et al. 2012) the complexity of the control software and the lack of mechanical
research collaboration between the Jules Stein Eye Institute and the UCLA Department
(Rahimy, et al. 2013). The IRISS system features a head-mounted “True Vision”
(True Vision Displays, Inc., Cerritos, CA) stereoscopic visualization system, two
joystick controls with tremor filtration and scaled motion, appropriately sized
Figure 1-22: UCLA's IRISS robotic eye surgery system.
The slave manipulator, shown in Figure 1-22, can hold two surgical instruments simultaneously. Two motors drive the curved guides on an XY bed, with their axes perpendicular to the curved guides. The instrument is then mounted on a carrier that slides over the curved guide. The carrier provides the axial motion of
CHAPTER 2: DESCRIPTION OF DESIGNED ROBOTIC SYSTEM
To design the surgical instrument for eye surgery, the requirements discussed in
must have 6 degrees of freedom, two of which are reserved to relocate the eye entry point. The first must provide a 45° to 315° peripheral manipulation range and the second a 0° to 90° vertical manipulation range. The entry point to the eye must be fixed, while the surgical instrument must have at least 4 degrees of freedom about the entry point. The instrument must also be capable of 30 mm penetration inside the eye and of exerting push and pull forces up to 5.8 N. The design must be compact and easy to install and use, with an accuracy of less than 50 μm (Mahpeykar 2013).
In this design the entry point is mechanically fixed. The advantage of this method over any method that fixes the entry point in software is its higher accuracy and safety, since it requires less actuation and is less influenced by
This approach uses the Remote Center of Motion (RCM) concept. The RCM point is the intersection point of all degree-of-freedom axes as well as the entry point to
Table 2-1: Degrees of freedom.
1 φ Ring motion
To relocate the entry point, a 2-DOF planar mechanism is required. Since the instruments enter the eye from the periphery, a polar ring system seems to be the best choice. This mechanism can control an angle and a radius to reach a point.
The mechanism chosen to provide 4 degrees of freedom about the entry point is the double-rotor mechanism, due to its simplicity, slender design, and low number of moving parts. The configuration and design of the ring and double-rotor mechanisms are
The ring mechanism is designed to generate a rotary motion about the eye, and its 350 mm diameter provides the required workspace for the optical devices above the patient's head. To manipulate the radial position, a linear joint is used. The
The ring itself serves as the surgical head-master and holds the system together. The linear joint acts like an overhead crane, positioning the double-rotor RCM in the radial direction. These two in combination provide a rigid platform for controlling the RCM.
slide. The carriage has a motor-driven pinion that meshes with the external gear
The double-rotor mechanism consists of two rotors and three axes, held at a constant angle of 22.5° between them by the designed linkages, as seen in Figure 2-2. This is a right-angle conic mechanism whose cone apex is the RCM point.
Figure 2-2: Double-rotor mechanism
Here, the rotations about the primary and secondary axes of the double-rotor
mechanism are called θ1 and θ2 respectively. A linear joint with a 45° angle with the
2.2 Axes of Motion
The ring motion is defined as Axis-I of this robotic system, with a range of 315°, since the other 45° of the ring is used to mount it in place. The lead screw linear joint bolted to the bottom of the ring forms Axis-II of motion. A precision Acme lead screw with an anti-backlash nut is used to provide a rigid positioning mechanism. See Figure 2-4. This lead screw mechanism accommodates a range of linear motion up to 90 mm.
The 45°-link, which is attached to the anti-backlash nut of the linear joint on one side and connected to the double-rotor mechanism on the other, forms Axis-III of the system, which is also the primary axis of the double rotor.
The secondary axis of the double rotor is Axis-IV of the system and has the same design as Axis-III. The primary and secondary axes of the double rotor both
Figure 2-5: Double rotor joints, Axis III & IV.
A linear joint connected to the end of the double-rotor mechanism forms Axis-V. This axis is parallel to the surgical tool's axis and has basically the same design as Axis-II. To position the surgical instrument, a custom-made nut with two linear bearings on the sides and an anti-backlash nut in the middle is used. This linear joint
instrument rotation about itself, which provides a range of 360° rotation of the surgical
Figure 2-6: Axis-V and VI, instrument carrier
In this design the motors are direct drive and placed on the joints to eliminate unnecessary backlash in the system, delivering more efficiency and accuracy. However, this increases the moving mass of the system and causes some wiring complications; but since light miniature motors are readily available, they can be used
To properly select motors for this design, the required force and velocity for each axis of the robot are listed in Table 2-2, based on simple calculations and estimations
Table 2-2: Actuator requirements.

Joint    | Type   | Mechanism            | Max. Cont. Velocity Required | Cont. Power (100% drive efficiency) | Desired Position Accuracy | Max. Continuous / Intermittent Force-Torque
Axis II  | Linear | Precision Lead Screw | 25 mm/s | 0.5 W | 50 μm | 7 N / 20 N
Axis III | Rotary | Direct Geared Motor  | 100 rpm | 2 W   | 0.5°  | 200 N·mm / 320 N·mm
Axis IV  | Rotary | Direct Geared Motor  | 100 rpm | 0.8 W | 0.5°  | 80 N·mm / 130 N·mm
Axis V   | Linear | Precision Lead Screw | 25 mm/s | 0.2 W | 50 μm | 5 N / 8.5 N
Axis VI  | Rotary | Direct Geared Motor  | 200 rpm | 0.5 W | 1.5°  | 20 N·mm / 35 N·mm
Table 2-3: Selected motor specifications.

Axis    | Motor Diameter | Power | Nominal Voltage | Encoder (CPT) | Gearhead Ratio
Axis I  | 22 mm | 25 W | 24 V | 512 | 19:1
Axis II | 22 mm | 25 W | 24 V | 512 | 1:1
Axis IV | 16 mm | 8 W  | 12 V | 512 | 19:1
This system has two linear joints with a lead screw and anti-backlash nut design to provide the required positioning accuracy. For Axis II, a ¼'' screw with a 1.27 mm (0.05'') lead is used. Therefore, the linear precision of Axis II can be calculated as follows:
Axis II Precision = (1.27 mm/turn ÷ 512 CPT) × 1000 μm/mm = 2.48 μm

Axis V Precision = (1.22 mm/turn ÷ 512 CPT) × 1000 μm/mm = 2.38 μm
For Axis I, the pinion that meshes with the ring gear has 19 teeth and the ring gear has 456 teeth, giving a ratio of 1:24 between the actuator shaft and the ring motion. The
Axis III Precision = (1/512) × (1/84) × 360° = 0.008°

The same configuration, with a 19:1 gearhead instead, is used for Axis IV. Therefore:

Axis IV Precision = (1/512) × (1/19) × 360° = 0.037°

Axis VI Precision = (1/128) × (1/16) × 360° = 0.17°
2.4 Forward Kinematics
Now that all the link frames at each joint are defined, kinematic relations between
this method the geometry of each connection (link) in the mechanism is defined by 4 parameters, namely: length, twist, offset, and angle. Link parameters for a general connection are illustrated in Figure 2-7. After assigning coordinate frames to each
α(i−1) (twist): the angle from the Z(i−1) axis to the Z(i) axis, measured about the X(i−1) axis
Figure 2-7: Link parameters: length, twist, offset and angle (Craig 2004)
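The four link parameters map to a homogeneous transform through the modified Denavit-Hartenberg convention of Craig. As an illustration (this helper is not from the thesis, whose code is in MATLAB), the standard form can be sketched as:

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Modified-DH link transform (Craig's convention): the transform of
    frame {i} relative to frame {i-1}, built from the twist alpha_(i-1),
    length a_(i-1), offset d_i and joint angle theta_i (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ])
```

With all four parameters zero this reduces to the identity, and multiplying a chain of such matrices in order yields the forward kinematics matrix of Eq. 2-1.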
A 4×4 matrix results from the forward kinematics analysis. This matrix describes the position and orientation of the {n}th link frame with respect to the base frame {0} and is a function of the joint variables (θi), as presented in Eq. 2-1. The coordinates of the end-effector's X-axis, Y-axis, and Z-axis are described in the first three columns, which together describe the orientation of the tool frame relative to the base
In the next step, the joint axes need to be defined. Z1 represents the axis of the ring motion, corresponding to the φ angle. Z2 is the axis of the first linear joint, the R position, corresponding to the radial movement of the RCM. Z3 is the axis of the first rotary joint of the double-rotor mechanism, the θ1 angle. Z4 is the axis of the second rotary joint of the double-rotor mechanism, the θ2 angle. Z5 is the axis of the second linear
the tool axis and rotation about itself, the ψ angle. These joint axes are shown in Figure 2-8.
Coordinate frames must then be assigned to each link, in the same way as shown in Figure 2-7. These coordinate frames are attached to the robot links as illustrated in Figure 2-9, with the frames of the double-rotor mechanism shown in detail on the
Figure 2-8: Defining joints axes
Link parameters are identified and listed in Table 2-4. The ring motion angle is
All distances between axes a(i−1) are zero in Table 2-4, which means that each axis intersects its neighboring axis. This in fact simplifies the kinematic equations. H is defined as the vertical distance between the base frame and the ring
Where r is the distance between the RCM and the base frame and is solely
Using the data from Table 2-4 with Eq. 2-3, the following transformation matrices can be found:
⁰₁T = [  cφ    sφ    0    0
        −sφ    cφ    0    0
          0     0    1    H
          0     0    0    1 ]                                Eq. 2-4

¹₂T = [  1     0     0    0
         0     0    −1    R
         0     1     0    0
         0     0     0    1 ]                                Eq. 2-5

²₃T = [  cθ1        −sθ1         0       0
        −sθ1/√2    −cθ1/√2      1/√2   −H
        −sθ1/√2    −cθ1/√2     −1/√2    H
         0          0           0       1 ]                  Eq. 2-6

³₄T = [  cθ2          −sθ2           0        0
         sθ2·c22.5     cθ2·c22.5    −s22.5    0
         sθ2·s22.5     cθ2·s22.5     c22.5    0
         0             0             0        1 ]            Eq. 2-7

⁴₅T = [  1     0        0        0
         0     c22.5   −s22.5   −Z·s22.5
         0     s22.5    c22.5    Z·c22.5
         0     0        0        1 ]                         Eq. 2-8

⁵₆T = [  cψ   −sψ    0    0
         sψ    cψ    0    0
         0     0     1    0
         0     0     0    1 ]                                Eq. 2-9
To find the end-effector coordinates in the origin frame, all these transformation matrices must be multiplied together.
Elements of Eq. 2-10 are quite lengthy and tedious to work with. A MATLAB code

φ = θ1 = θ2 = ψ = 0.5 radians
R = Z = 1
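As a cross-check of the chain in Eqs. 2-4 through 2-9, the matrices can be composed numerically. The sketch below is an illustrative Python/NumPy version (the thesis code is in MATLAB); H is a design constant whose value is not given in this excerpt, so an arbitrary placeholder is used, and the check only verifies that the composed result is a valid rigid-body transform.

```python
import numpy as np

S2 = np.sqrt(2.0)
CA, SA = np.cos(np.deg2rad(22.5)), np.sin(np.deg2rad(22.5))  # fixed 22.5° angle

def fk(phi, R, th1, th2, Z, psi, H=0.1):  # H: placeholder design constant
    """Compose the six link transforms of Eqs. 2-4 to 2-9 into 0_6T."""
    c, s = np.cos, np.sin
    T01 = np.array([[c(phi), s(phi), 0, 0], [-s(phi), c(phi), 0, 0],
                    [0, 0, 1, H], [0, 0, 0, 1]])
    T12 = np.array([[1, 0, 0, 0], [0, 0, -1, R],
                    [0, 1, 0, 0], [0, 0, 0, 1]])
    T23 = np.array([[c(th1), -s(th1), 0, 0],
                    [-s(th1)/S2, -c(th1)/S2,  1/S2, -H],
                    [-s(th1)/S2, -c(th1)/S2, -1/S2,  H],
                    [0, 0, 0, 1]])
    T34 = np.array([[c(th2), -s(th2), 0, 0],
                    [s(th2)*CA, c(th2)*CA, -SA, 0],
                    [s(th2)*SA, c(th2)*SA,  CA, 0],
                    [0, 0, 0, 1]])
    T45 = np.array([[1, 0, 0, 0], [0, CA, -SA, -Z*SA],
                    [0, SA, CA, Z*CA], [0, 0, 0, 1]])
    T56 = np.array([[c(psi), -s(psi), 0, 0], [s(psi), c(psi), 0, 0],
                    [0, 0, 1, 0], [0, 0, 0, 1]])
    return T01 @ T12 @ T23 @ T34 @ T45 @ T56

# The sample joint values quoted above.
T06 = fk(phi=0.5, R=1, th1=0.5, th2=0.5, Z=1, psi=0.5)
```

A useful sanity check on the composition is that the 3×3 rotation block stays orthonormal with determinant +1, as any product of valid rigid-body transforms must.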
have a closed-form solution, certain special cases can be solved. For instance, in this design all axes have been chosen in a way that makes most of the parameters zero.
intersect at one point, based on Pieper's work on 6-axis robots (Pieper and Roth 1969) (D. Pieper 1968). However, that work was mostly done on all-revolute joints. In this case a custom method is needed to solve the inverse kinematics, since this robot cannot be put into a form for which a solution is known. In the geometric approach, the spatial geometry of the robot must be decomposed into several sub-problems to find the
equations, each of which is quite lengthy. For example, the first element r11 in the

r11 = sin φ ((sin θ2 (.271 cos θ1 + .653) + .383 sin θ1 (.271 cos θ1 − .653)(cos θ2 − 1))(cos ψ cos θ1
      − (.271 cos θ1 + .653)(.271 cos θ1 − .653)(cos θ2 − 1))(sin ψ ((cos θ1 − 1)/2))
      + (sin ψ ((cos θ1 + 1)/2) + .707 cos ψ sin θ1)(.271 cos θ1 − .653)²
      − .383 sin θ1 (.271 cos θ1 − .653)(cos θ2 − 1)(sin ψ ((cos θ1 + 1)/2) + .707 cos ψ sin θ1)
      − (sin ψ ((cos θ1 − 1)/2) + .707 cos ψ sin θ1)(sin θ2 (.271 cos θ1 − .653)
      + .383 sin θ1 (.271 cos θ1 + .653)(cos θ2 − 1) + (cos ψ cos θ1 − .707 sin ψ sin θ1)
There are 11 more equations like this to solve for the unknown angles and prismatic variables. Clearly, the inverse kinematics of this manipulator cannot be solved directly from trigonometric equations of this kind. However, we can take advantage of the manipulator's geometry and break the inverse kinematics into a few sub-problems, one for each joint parameter. This can be done using the key points and axes described in the forward-kinematics section.
In this approach, the key point of the mechanism, the RCM, is found first. Since the RCM coordinates can be related directly to the first two joints, we can then solve for φ and R. In the next step, Z, the distance from the tool frame origin to the RCM point, is calculated. The only unknowns remaining at this step are θ1, θ2 and ψ. A MATLAB code that calculates the inverse kinematics of this robot is available in Appendix B.
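The first of those sub-problems can be illustrated with a small sketch. The planar geometry below is an assumption made for the example, not the thesis's exact derivation: if the RCM position in the ring plane is known, the ring rotation φ and the radial offset R follow from its polar coordinates.

```python
import math

def ring_joints_from_rcm(x, y):
    """Given the RCM position (x, y) in the ring plane, recover the ring
    rotation phi and the radial offset R -- a planar geometric sub-problem."""
    phi = math.atan2(y, x)   # ring rotation about the optic axis
    R = math.hypot(x, y)     # radial distance of the entry point
    return phi, R

# Round-trip check: place the RCM at phi = 0.5 rad, R = 1, then recover both.
phi_in, R_in = 0.5, 1.0
x, y = R_in * math.cos(phi_in), R_in * math.sin(phi_in)
phi_out, R_out = ring_joints_from_rcm(x, y)
```

Each remaining joint variable is recovered by a similar, geometrically motivated step.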
2.6 Workspace
The workspace requirement of this robot is a sphere about 30 mm in diameter, the size of the eyeball, which must be reachable by its end-effector. The reachable workspace of the robot is the 3D sub-space that the tip of the robot can reach with at least one orientation. The shape of the reachable workspace for this robot is found by evaluating the kinematics at the upper-limit and lower-limit values of the joints.
The combination of the planetary ring motion and the top linear motion, the first and second axes, gives a workspace in the shape of a planar disc whose center lies on the surgical base plane and is coincident with the ring axis. This disc is the locus of the RCM.
The double-rotor mechanism (Axes III to VI) creates a conic workspace with two portions, depending on the position of the instrument tip. When the tip of the instrument is above the RCM (positive Z), the workspace is the upper right cone; when it is below the RCM (negative Z), it is the lower left cone, as shown in Figure 2-11. The total reachable workspace is therefore found by sweeping the conic workspace over the disc, which gives the mushroom shape in Figure 2-10.
Figure 2-11: Double-rotor conic workspace; positive and negative sections.
2.7 Accuracy
To find the integrated precision of this robotic system, the per-axis precisions calculated in Section 2.3 must be combined. Now that the forward kinematics of the robot is available, the total accuracy of the system can be discussed. First, a home configuration of the robot is considered. Then each DOF is misplaced by an amount equal to the precision of its axis, with all misplacements taken in the same direction so that the error of one axis does not make up for the misplacement error of another. Using the forward-kinematics equations, the resulting position and orientation of the tool tip are compared with the home configuration:
Table 2-5: Misplacement variables
       ⎡ 1  0  0  0 ⎤
Home = ⎢ 0  1  0  0 ⎥
       ⎢ 0  0  1  0 ⎥
       ⎣ 0  0  0  1 ⎦
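The worst-case combination described above can be sketched numerically. The per-axis precisions and the lever arm below are hypothetical placeholders (the actual values come from Section 2.3), but the accumulation rule is the one used here: a small angular error contributes roughly (lever arm) × (angle) to the tip error, and all contributions are summed with the same sign so that no axis compensates another.

```python
def worst_case_tip_error(angular_precisions_rad, lever_arm_m, linear_precisions_m):
    """Sum first-order tip-error contributions of each axis, all taken in the
    same (worst-case) direction so no axis cancels another."""
    angular = sum(abs(dq) * lever_arm_m for dq in angular_precisions_rad)
    linear = sum(abs(dz) for dz in linear_precisions_m)
    return angular + linear

# Hypothetical numbers: two rotary axes good to 50 microradians acting over a
# 100 mm lever arm, plus two linear axes good to 2 micrometers each.
err = worst_case_tip_error([50e-6, 50e-6], 0.100, [2e-6, 2e-6])
# err = 2*(50e-6 * 0.1) + 2*2e-6 = 10e-6 + 4e-6 = 14 micrometers
```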
CHAPTER 3 : CONTROL THEORY OF THE SYSTEM
A linear system maps an input signal u(t) to an output y(t); the response y(t) is determined by the nature of the input signal u(t) and the system's dynamics. If g(t) is assumed to be the response of the system to a unit impulse at time t = 0, then the response of the system y(t) to any input u(t) can be visualized as the response to a train of impulses that approximates the function (see Figure 3-2).
Figure 3-2: Unit impulse 𝛿(𝑡) and response to a set of individual impulses.
Any one of the individual response curves can be expressed as 𝑢(𝜏)𝑔(𝑡 − 𝜏),
where 𝜏 is the time that the impulse is applied (R.Leigh 2004). Then the response to
any input u(t) can be given by the convolution integral using superposition:
y(t) = ∫₀ᵗ g(t − τ) u(τ) dτ = (g ∗ u)(t)                Eq. 3-1
If U(s) and G(s) are assumed to be the Laplace transform of u(t) and g(t)
respectively, then by taking Laplace transform of both sides of this equation, we get:
Y(s) = G(s) U(s)                Eq. 3-2
This is how the Laplace transform makes it much simpler to find the response to any input.
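The superposition argument behind Eq. 3-1 can be checked numerically: approximating the input as a train of impulses and summing the shifted, scaled impulse responses reproduces the known step response of a simple first-order example. The plant g(t) = e^(−t) is an illustrative choice, not the thesis motor.

```python
import math

def convolve_response(g, u, t_end, dt):
    """Discrete approximation of y(t) = integral_0^t g(t - tau) u(tau) dtau."""
    n = int(t_end / dt)
    gs = [g(k * dt) for k in range(n + 1)]
    us = [u(k * dt) for k in range(n + 1)]
    # y[n] = sum over tau of g(t - tau) * u(tau) * dt  (Eq. 3-1 as a Riemann sum)
    return sum(gs[n - k] * us[k] * dt for k in range(n + 1))

# First-order plant g(t) = exp(-t) with a unit-step input.
# Analytic step response: y(t) = 1 - exp(-t), so y(1) is about 0.632.
y1 = convolve_response(lambda t: math.exp(-t), lambda t: 1.0, t_end=1.0, dt=1e-3)
```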
A closed loop system is shown in Figure 3-3 where C is the controller, G is the
plant, r is the controller input or desired value and y is the plant output and the
transfer functions of these elements are C(s), G(s), R(s) and Y(s), respectively. The
system is linear and time-invariant, which means the elements of its transfer functions do not vary with time. The characteristics of this system are defined as:
r: Controller Input (desired value)
E: Error
Then, based on Eq. 3-2 derived in the previous section, the following relations can be expressed:
where  H(s) = G(s)C(s) / (1 + G(s)C(s))                Eq. 3-4
3.3 Dynamic Model of a DC Motor
Since the robot uses DC motors as the main elements of its control system, it is essential to develop a linear model of a DC motor. A brushed DC motor consists of a stator, which provides a constant magnetic field, and rotor coils, which provide the electromagnetic force for rotation. The coils are powered through the commutator and the brushes; the more coils there are, the smoother the rotation. The geometry and position of the brushes (and of the commutator) are responsible for switching the magnetic field of the electromagnets according to the position of the rotor. The brushes are two metallic pieces that act like springs: on one side they carry a piece of conductive material, usually carbon, to withstand friction, and on the other side they carry the pin through which the supply voltage is applied to the motor. The brushes are pushed, by the spring action of the metallic part, against the commutator, a conductive metallic ring fixed on the shaft of the motor. The shaft transfers the rotation to the moving elements of the mechanism.
A DC motor can also be modeled as an electric circuit of the armature together with a mechanical load. The rotor and the shaft are assumed to be rigid. The parameters of this model are:
v: applied voltage
i: armature current
vb : EMF voltage
θ: angular position
ω: angular velocity
The input is the armature voltage v in Volts (driven by a voltage source). Measured
variables are the angular velocity of the shaft ω in radians per second, and the shaft
angle θ in radians.
For the armature electric circuit, the following equation can be written based on Kirchhoff's voltage law:
v(t) = R i(t) + L di(t)/dt + v_b(t)                Eq. 3-5
where R is the armature resistance, L is the armature inductance, and v_b is the voltage induced in the armature due to its motion inside the magnetic field, as stated by Faraday's law. The back electromotive force (EMF), v_b, is related to the angular velocity by:
v_b(t) = k_b ω(t) = k_b dθ/dt                Eq. 3-7
where k_b is the back-EMF constant. (Balancing the electrical power v_b·i against the mechanical power T·ω shows that, in SI units, the torque constant equals the back-EMF constant; both are denoted K below.) Also, the following equation can be written for the rotor based on Newton's second law:

J d²θ(t)/dt² + B dθ(t)/dt = T(t) − T_l(t)                Eq. 3-8

where J is the rotor inertia, B the viscous friction coefficient, T = K i(t) the motor torque, and T_l the load torque.
Now that the equations of the system are derived, it is time to take their Laplace transforms. Using the Laplace transform, Eq. 3-5 and Eq. 3-7 can be written as:

V(s) = (Ls + R) I(s) + V_b(s)                Eq. 3-9

and V_b(s) = K Ω(s)                Eq. 3-10

where s denotes the Laplace operator. Also, for Eq. 3-8 we get:

J s² θ(s) + B s θ(s) = T(s) − T_l(s)                Eq. 3-11
From combination of Eq. 3-9 and Eq. 3-10, 𝐼(𝑠) can be expressed as:
I(s) = (V(s) − K s θ(s)) / (Ls + R)                Eq. 3-12
Since T(s) is also equal to K I(s), combining Eq. 3-11 and substituting I(s) from Eq. 3-12 gives:
J s² θ(s) + B s θ(s) + T_l(s) = K (V(s) − K s θ(s)) / (Ls + R)                Eq. 3-13
From this equation, the transfer functions of the shaft angle θ(s) and the angular velocity Ω(s) in terms of the torque are:
θ(s) = (T(s) − T_l(s)) / (s(Js + B))                Eq. 3-14
Ω(s) = (T(s) − T_l(s)) / (Js + B)                Eq. 3-15
Assuming the no-load case (T_l(s) = 0), Eq. 3-13 for the DC motor can be represented by the block diagram in Figure 3-6. The resulting transfer function from the input voltage, V(s), to the output angle, θ(s), is:
G_a(s) = θ(s)/V(s) = K / (s[(Ls + R)(Js + b) + K²])                Eq. 3-16
From the block diagram in Figure 3-6, it is easy to see that the transfer function
from the input voltage, V(s) , to the angular velocity, ω(s), is:
G_v(s) = Ω(s)/V(s) = K / ((Ls + R)(Js + b) + K²)                Eq. 3-17
Now that the position and velocity transfer functions, G_a(s) and G_v(s), are derived, they can be used to analyze the behavior of this control system with the real motor parameters.
3.4 Motor (V) Transfer Function Analysis
The DC brushed motor (V) of the robot is chosen for further analysis. Its parameters are:
J = 0.13×10⁻⁷ kg·m²
R = 27.4 Ω
b = 10⁻⁷ N·m·s/rad
L = 0.399×10⁻³ H
K = 0.01 N·m/A
G_a(s) = 0.01 / (5.187×10⁻¹² s³ + 3.562×10⁻⁷ s² + 0.1027×10⁻³ s)

G_v(s) = 0.01 / (5.187×10⁻¹² s² + 3.562×10⁻⁷ s + 0.1027×10⁻³)
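The denominators above follow directly from Eq. 3-16 and Eq. 3-17 with the listed motor parameters: expanding (Ls + R)(Js + b) + K² gives the quoted coefficients. A quick sketch of that expansion:

```python
# Expand (L*s + R)(J*s + b) + K^2 into a2*s^2 + a1*s + a0 and compare with the
# coefficients quoted for Gv(s); Ga(s) has the same polynomial times s.
J = 0.13e-7      # rotor inertia, kg*m^2
R = 27.4         # armature resistance, ohm
b = 1e-7         # viscous friction, N*m*s/rad
L = 0.399e-3     # armature inductance, H
K = 0.01         # torque / back-EMF constant, N*m/A

a2 = L * J            # s^2 coefficient: ~5.187e-12
a1 = L * b + R * J    # s^1 coefficient: ~3.562e-7
a0 = R * b + K**2     # s^0 coefficient: ~1.027e-4
```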
The above transfer functions can be entered into the MATLAB code presented in Appendix C to plot the responses of the velocity and the angular position. This code takes the position and velocity transfer functions and returns the corresponding response plots.
3.4.1 Step and Impulse Responses
The step and impulse responses of the system are the first characteristic of the
system to be analyzed. The step response is the response of the system to a unit
input applied at t ≥ 0, which here means connecting the motor to a power source that produces one volt. The impulse response, on the other hand, is the response to an idealized input that is infinite at time t = 0 and zero otherwise. Note that the Laplace transform of the step function is 1/s, [V(s)step = 1/s], while the Laplace transform of the impulse function is one, [V(s)impulse = 1]. Also, by definition, the derivative of the step function is the impulse function. The step and impulse responses of the position transfer function are shown in Figure 3-7.
Figure 3-7: The step and impulse responses of the position transfer function.
After applying the unit-voltage input to the motor, the step response rises as expected, since the angular position increases once the motor starts rotating. The response of the position to the impulse input, however, converges to about 100 rad, which can be validated by taking the inverse Laplace transform of G_a(s) with V(s) = 1.
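That limit can also be verified with the final value theorem: for a unit impulse, θ(∞) = lim(s→0) s·G_a(s)·1 = K/(Rb + K²), which for the listed parameters is about 97 rad — consistent with the roughly 100 rad read off the plot.

```python
# Final value theorem for the impulse response of Ga(s):
# theta(inf) = lim_{s->0} s * Ga(s) = K / (R*b + K^2)
R, b, K = 27.4, 1e-7, 0.01
theta_final = K / (R * b + K**2)
# ~97.3 rad, in line with the ~100 rad the impulse-response plot converges to.
```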
The same responses are also plotted for the velocity transfer function, as shown in Figure 3-8.
Figure 3-8: The step and impulse responses of the velocity transfer function.
The step response of the velocity shows the same behavior as the impulse response of the angular position. This is expected, since the velocity is related to the angular position by Ω(s) = s θ(s), while the Laplace transform of the step input is 1/s times that of the impulse input:

G_v(s) = Ω(s)/V(s)step = s θ(s) / (s V(s)impulse)
3.4.2 Frequency Domain Analysis
In order to analyze the frequency response of the system, the “Bode diagram” is
[Bode diagrams: magnitude and phase from voltage to angle, and from voltage to velocity; the marked point on the velocity plot reads 39.7 dB at 43.8 rad/s.]
Figure 3-9: The position and velocity responses of the system to the voltage
frequency.
A Bode plot is a useful tool that shows the gain and phase response of a given linear time-invariant system over a range of frequencies. Bode plots use a logarithmic frequency axis: every tick mark on the frequency axis represents a power of 10 times the previous value. The Bode magnitude plot expresses the system gain in decibels.
The Bode phase plot measures the phase shift in output with respect to input in
degrees. The reason to observe the frequency domain of the system is to make sure
there would not be any critical frequency to create excitation in the system.
As can be seen in these diagrams, the Bode magnitude falls as the frequency increases for both the angle and the velocity responses. To check the result, the magnitude of the velocity response for frequencies between 10 and 100 rad/s can be estimated with the following calculation:
dB = 20 log|G_v| = 20 log [0.01 / (5.187×10⁻¹² s² + 3.562×10⁻⁷ s + 0.1027×10⁻³)] ≅ 20 log 100 = 40 dB
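The same evaluation can be made exactly at the data point marked on the Bode plot (43.8 rad/s, 39.7 dB) by substituting s = jω into G_v(s):

```python
import math

def gv_db(omega):
    """Magnitude of Gv(j*omega) in decibels, using the coefficients quoted in
    Section 3.4."""
    s = 1j * omega
    gv = 0.01 / (5.187e-12 * s**2 + 3.562e-7 * s + 0.1027e-3)
    return 20 * math.log10(abs(gv))

db = gv_db(43.8)   # the point marked on the velocity Bode plot, ~39.7 dB
```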
Also, from these diagrams it is assured that there is no critical frequency that would excite the system within its operating range.
Now that the behavior of the system has been observed through the step, impulse and frequency responses, it is time to add a controller. First, a simple proportional controller is considered. In this system, θd(s) is the desired angular position, Kp is the proportional gain and θ(s) is the actual position. The error is defined as the difference between the desired and actual positions, and the controller output is:

U(s) = Kp E(s)
H(s) = Kp G(s) / (1 + Kp G(s))                Eq. 3-18
Then, for motor (V) discussed in this chapter, the closed-loop position transfer function becomes:
H_a(s) = 0.01 Kp / (5.187×10⁻¹² s³ + 3.562×10⁻⁷ s² + 0.0001027 s + 0.01 Kp)                Eq. 3-19
If a large value is chosen for the proportional gain, a given error produces a large change in the output; if the gain is too high, the system can become unstable. In contrast, a small gain yields a small output response to a large input error and a less responsive, less sensitive controller; if the gain is too low, the control action may be too weak when responding to disturbances. The goal is to find the value of Kp that minimizes both the error and the response time. To find the best Kp, the step response of the closed-loop system is examined for several gains.
For the first plot, Kp = 1. The result shows a quite small overshoot, but the system approaches the desired value very slowly, so the proportional gain must be increased to give a faster approach.
The next response is plotted for Kp = 5, as shown in Figure 3-12. Here the result shows a faster approach to the desired value, but with a larger overshoot at the start of the response.
To further study the effect of the proportional gain, the responses for Kp = 10 and a still larger gain are plotted as well.
The proportional controller alone does not deliver adequate performance for the high-precision robot that is the focus of this research; a more advanced controller is therefore needed. The PID controller is the most common controller used in industry. The structure of this three-term controller lets it compensate for variations in the characteristics of the plant. It calculates its output from the measured error and the three controller gains: the proportional gain Kp, the integral gain Ki, and the derivative gain Kd. The proportional term simply multiplies the error by the factor Kp and so reacts to how large the present error is. The integral term is the integral gain times the accumulated past error; it eliminates the residual steady-state error and drives the system to the desired set point. The derivative term determines the reaction to the rate at which the error has been changing; it helps predict future errors from the current rate of change.
The final output of the PID controller (U) is calculated using the following
equation:
u(t) = Kp e(t) + K_I ∫ e(t) dt + K_D de(t)/dt                Eq. 3-20
3.6.1 Transfer Function of PID Controller
U(s) = Kp e(s) + (K_I / s) e(s) + K_D s e(s)

U(s) = (Kp + K_I / s + K_D s) e(s) = C(s) e(s)

Where:

C(s) = Kp + K_I / s + K_D s
C(s) is called the PID controller transfer function. Combining this result with Eq. 3-4, the closed-loop position transfer function with the PID controller is:

H_a(s) = C(s) G_a(s) / (1 + C(s) G_a(s))

With this closed-loop transfer function, the behavior of the system can be analyzed to find the best proportional, integral and derivative gains. This process is called manual tuning. There are several strategies for tuning a PID controller, including trial and error and the Ziegler–Nichols method (Meshram and Kanojiya 2012). The goal here is to minimize the settling time with no overshoot; in this case the optimum result was obtained by trial and error.
3.6.2 Proportional Gain Effect
To analyze the effect of the proportional gain in the PID system, the integral and derivative gains are fixed at Ki = 100 and Kd = 1, and the results are plotted for Kp = 1, 50 and 500.
Figure 3-16: The PID controller step response for Kp=1 (left) and Kp=50 (right).
These plots show that although the system is sensitive to the value of Kp, the result does not change significantly when Kp is raised from 1 to 50 or even to 500. As discussed earlier, increasing Kp shortens the settling time but enlarges the overshoot.
3.6.3 Integral Gain Effect
To study the effect of the integral gain in the PID system, the proportional and derivative gains are fixed at Kp = 100 and Kd = 1, and the results are plotted for Ki = 1 and 100.
In this case, even a big change in the value of Ki, from 1 to 100, does not produce a significant difference in the response.
3.6.4 Derivative Gain Effect
To study the effect of the derivative gain in the PID system, the proportional and integral gains are fixed at Kp = 100 and Ki = 10, and the results are plotted for Kd = 0, 1 and 2.
Figure 3-20: The PID controller step response for Kd=0 (left) and Kd=1 (right).
As can be seen in these diagrams, the response of the system is highly sensitive to the value of Kd: even very small changes in this value have a big impact on the quality of the response.
3.6.5 PID Controller Gain Values
The best response for this system is obtained with Kp = 200, Ki = 100 and Kd = 0.7. As the corresponding figure shows, the response reaches the set point with almost no overshoot (0.01%).
These are the best PID gains that could be found for this specific system. If Kp is chosen smaller than this value, the system reaches the set point more slowly; if a bigger value is selected, overshoot results. Although the system is not very sensitive to the value of Ki, this value of Ki still gives the fastest result. If the value of Kd is chosen slightly smaller, such as 0.6, a steady-state error results; if it is chosen slightly larger, the quality of the response degrades as well.
CHAPTER 4 : CONTROLLER ARCHITECTURE
In this chapter the control system of the robot, with all its elements and features, is discussed. The overall system is shown in Figure 4-1.
Figure 4-1: The microsurgery robotic system with its motors and controllers
4.1 Encoder
An incremental encoder is used for position and speed evaluation (Kafader 2014). The incremental digital encoder is one of the most common sensor types for measuring speed and position in micro-drive tools. Many designs with different working principles use this kind of encoder, but all of them basically subdivide one revolution into a number of steps (increments) and emit a signal for each step. Such encoders generally supply square-wave signals on two channels, A and B, which are offset by a quarter period (90°).
In this robot the encoder resolution for motors (I) to (V) is 512 cpt (counts per turn) and 128 cpt for motor (VI). The high resolution provided by these encoders gives the robot its micro-positioning capability. The encoder and its connection cable for motor (I) are indicated in Figure 4-2 by red arrows.
The encoder resolution is specified as the number of pulses per channel per revolution, given in cpt (counts per turn). By counting the state transitions on both channels, the effective resolution is four times higher. These states are called quadrature counts, or quad counts (qc), and are used as the position unit by the controllers.
The higher the resolution, the more accurately the position can be measured, and the more accurately the speed can be derived from the change in position as a function of time. The direction of rotation follows from the sequence of the signal pulses on channels A and B: channel A leads in one direction, channel B leads in the other. Position is counted in increments from a defined reference position, e.g. the home position, and the unit of positioning is quad counts (qc).
Speed is evaluated from the position change per sampling interval. In this case the natural unit of speed is quad counts per millisecond; speed can therefore only be expressed in steps of 1 qc/ms, which is usually translated into the more common and practical unit rpm. Acceleration and deceleration values are given in the corresponding derived units.
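The unit conversion can be written out explicitly. For a 512 cpt encoder (4 × 512 = 2048 qc per motor revolution), one speed step of 1 qc/ms corresponds to about 29.3 rpm at the motor shaft:

```python
def qc_per_ms_to_rpm(qc_per_ms, counts_per_turn):
    """Convert a speed in quadrature counts per millisecond to rpm.
    One motor revolution is 4 * counts_per_turn quadrature counts."""
    qc_per_rev = 4 * counts_per_turn
    qc_per_minute = qc_per_ms * 1000 * 60
    return qc_per_minute / qc_per_rev

speed_rpm = qc_per_ms_to_rpm(1, 512)   # smallest speed step for motors (I)-(V)
# 1 qc/ms -> 60000 / 2048 ~ 29.3 rpm
```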
A high encoder pulse count N not only results in a high position resolution but also improves the control dynamics: feedback about a change in the actual position is obtained more quickly, and the control circuit can initiate the corrective action sooner.
All six controllers in this robotic system are modular digital positioning controllers with a power range from 1 to 700 watts. A number of operating modes provide flexibility for different positioning tasks.
The controller device used in this robotic control system is a CANopen motion controller. Controller Area Network, or CAN for short, is a common message-based protocol used in control systems.
On a physical level, each device, called a node and given its own address (node number), is connected in parallel on the bus (Kohanbash 2014). While there is no standard way of wiring a CAN bus, there are always two wires/signals, and often another two. The first two are CAN+ (or CAN_H) and CAN− (or CAN_L), which carry the CAN data; the next two are power and ground lines for the devices on the network. All of the CAN devices can be wired in series, creating a large bus of devices. The bus needs proper termination on both sides to avoid signal reflections, meaning a 120 Ω resistor at each end. The physical layout of this robot with 6 nodes is shown in Figure 4-4.
Figure 4-4: Physical layout of the six motor controllers CAN bus.
Signal bits require a certain time to propagate over the whole bus, so the transmission rate depends on the bus length. In robotics, each CAN network typically has one "master" node, usually a computer or microcontroller, that requests information from, or listens for alerts and emergencies of, all the CAN devices on the network. In this robot the master node is node 1, the controller of motor I; this node sends signals to and receives signals from the other 5 controllers.
Two types of controller are used in this design. The first, "EPOS2 24/2 for Maxon EC motors", is used for the first four motors; the other, "EPOS2 24/2 for Maxon DC motors", is used for motors (V) and (VI).
The EC motor controller with its connection ports is shown in Figure 4-5; this is the controller used for the first four motors of the robot.
The J1 connector carries the supply and control signals; its pins 12 and 13 are wired to the power supply. The J2 connector is the communication connector used for the CAN bus (Maxon, Application Notes Collection 2013): pins 1, 2 and 5 are wired to the CAN high bus line, the CAN low bus line and the ground line, respectively. J8 is the motor/Hall-sensor connector, J9 is for the encoder cable connection, and finally J15 is the USB connector.
JP1 is the CAN node identification and is used to set the CAN ID, or node address (Maxon, Application Notes Collection 2013). The CAN ID is set with DIP switches 1 to 4, as shown in Figure 4-6; with these four binary switches, 15 addresses (1 to 15) may be coded.
The CAN ID is the sum of the values of the DIP switches that are set to "ON". For instance, the DIP switch settings identifying the six nodes of this robotic system must be as shown in Table 4-1. DIP switches 5 and 6 have no impact on the node address.
Table 4-1: DIP switch setting for robotic system with 6 nodes.
Node 1: 2⁰ (switch 1 ON)
Node 2: 2¹ (switch 2 ON)
Node 3: 2⁰ + 2¹ (switches 1 and 2 ON)
Node 4: 2² (switch 3 ON)
Node 5: 2⁰ + 2² (switches 1 and 3 ON)
Node 6: 2¹ + 2² (switches 2 and 3 ON)
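The addressing rule can be expressed compactly: each DIP switch i (1 to 4) that is ON contributes 2^(i−1) to the node ID.

```python
def can_node_id(on_switches):
    """CAN node ID from the set of DIP switches (1..4) that are ON.
    Switch i contributes 2**(i - 1) to the ID."""
    return sum(2 ** (sw - 1) for sw in on_switches)

# The six nodes of this robot, per Table 4-1:
ids = [can_node_id(sw) for sw in ([1], [2], [1, 2], [3], [1, 3], [2, 3])]
# -> [1, 2, 3, 4, 5, 6]
```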
DIP switch 5 selects the CAN bit-rate detection and DIP switch 6 enables the CAN bus termination; as noted above, the CAN bus must be terminated at both ends by a 120 Ω resistor.
There is also a status LED on the controller board that displays the current state of the controller as well as possible errors: the green LED shows the operating status, while the red one indicates errors. Figure 4-7 shows controller (III) in operation with the green LED on.
Figure 4-7: Controller (III) in operating status with the Green LED “ON”.
The DC motor controller with its connections is shown in Figure 4-8; this is the controller used for motors (V) and (VI). Its J1, J2, J15 and JP1 connectors are the same as discussed for the EC controller. The motor and its encoder are connected through the adapter encoder connector, as shown in Figure 4-9.
Figure 4-9: Motor/encoder connector (left) and DC brushed controllers (V) & (VI)
Existing robot control system structures can be roughly divided into centralized and distributed architectures. As already mentioned, this robotic system uses CAN for its control network, with a controller installed at the related joint of the robot and connected to the master node.
By placing the controllers on the robot arm, significant reductions in cabling
could be achieved. In particular, the thick cables containing multiple wires for motor
power and sensor feedback could be replaced by thin network and power cables.
The other reason that makes the CAN network the best option for this case is that it lets all devices communicate with one another over a single shared bus. The advantage of this is that electronic control units (ECUs) can have a single CAN interface rather than analog and digital inputs to every device in the system, which decreases the overall cost and weight of the system.
Each of the devices on the network has a CAN controller and is therefore intelligent. All devices on the network see all transmitted messages, and each device can decide whether a message is relevant to it. This structure also allows modifying CAN networks with minimal impact: additional nodes can be added without changes to the existing ones.
The CAN system is also very capable of handling errors. Frames with errors are disregarded by all nodes, and an error frame can be transmitted to signal the error to the network. Global and local errors are differentiated by the controller, and if too many errors are detected, individual nodes can stop transmitting or disconnect themselves from the network.
4.4 Easy Positioning System (EPOS) Controller Software
The EPOS Studio is the graphical user interface for all of the Maxon controllers used in this system.
Essentially, the controller contains two devices, a PLC and a motion controller, that are internally linked by a CAN bus. The EPOS Studio on the computer communicates with these two devices over a USB 2.0 serial connection as the default communication channel. The PLC part is the master of the device: it controls the process flow and all slave units in the network, one of those slave units being the built-in motion controller.
Figure 4-10: Schematic overview of the controller with external connections and
communication.
To connect to the controllers through the EPOS Studio on the computer, we first need to wire all 6 controllers together, as discussed in section 4.2. Then, to connect this series of controllers to the software, one of them (usually the first motor controller) must be connected to the computer through the USB port. To get started, we create a new project and select the controller hardware for it.
Figure 4-11: The EPOS Studio screen with the Workspace tab open.
The Communication tab shows the setup of the project from a communication point of view. A convenient feature of the EPOS Studio is that we can communicate directly with any device in the network once connected. The first node, attached to the computer through USB, is usually recognized quickly by the software. To find the rest of the nodes, a scanning tool is available on the Communication tab; it can scan for up to 127 nodes, but once it has found our other 5 nodes we can stop the scan and add those nodes to the network.
Now that the system is configured, the EPOS motion controller is ready for further exploration; the relevant tools are found under the Tools tab. The standard positioning mode for most applications is the "Profile Position Mode". To execute a point-to-point movement, we use this mode and enter the target position in increments from a defined reference position; the unit of positioning is quad counts (qc).
Figure 4-13: Different operating modes under the “Tools tab”.
The required calculations for converting the motion of the six robot axes into the qc unit (the unit understood by the EPOS Studio program) are summarized in Table 4-2. For example, to drive the robot arm around the ring by 45 degrees, the target position for node 1 (the first, rotating axis) would be 116736 qc.
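The conversion behind that number can be reconstructed from the quoted values. With a 512 cpt encoder (2048 qc per motor revolution), a target of 116736 qc for a 45° ring motion implies a reduction of 456 motor revolutions per ring revolution — an inferred figure, used below only as an assumption:

```python
def ring_angle_to_qc(angle_deg, counts_per_turn=512, gear_ratio=456):
    """Target position in quadrature counts for a ring rotation of angle_deg.
    gear_ratio (motor revs per ring rev) is inferred from the 45 deg -> 116736 qc
    example in the text, not taken from a datasheet."""
    qc_per_motor_rev = 4 * counts_per_turn          # 2048 qc per motor rev
    motor_revs = (angle_deg / 360.0) * gear_ratio   # motor revolutions needed
    return motor_revs * qc_per_motor_rev

target = ring_angle_to_qc(45)   # reproduces the 116736 qc quoted in the text
```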
For the second axis, which is a linear one, the target for moving the arm 20 mm along its axis is obtained in the same way from the conversion data in Table 4-2.
Figure 4-14: Data entry for 45 degree motion of node 1 (around the ring) in profile
position mode.
Table 4-2: Data sheet to convert the motion of six axes to quad count (qc) unit.
To move an object to a specified final position, the object must first be set in
motion (accelerated), moved and then decelerated. Finally, the specified position
must be maintained against any interfering forces. All these operations require a
motor torque that must be made available to the position control system. Ultimately,
position control relies on the lower level current control loop for creating the
necessary torque.
Here, the master system sends motion commands to the motion controller. They are processed by the path generator, which calculates intermediate positions at given time intervals (1 ms on EPOS systems) along the path to the final position. These set values are passed to the position control loop, which, by comparison with the actual position, determines the set (or demand) values for the current controller. In the EPOS, the current control loop has a cycle time of 0.1 ms, i.e. it runs at 10 kHz.
The position resolution of the control loop is one quad count (qc); as soon as the position is more than 1 qc off the target, the position control reacts.
It must be noted that the system's reaction to deviations from the required positions depends on the torque and speed capabilities of the system as well as on the friction, the mass inertias, and the configured feedback and feed-forward parameters. The controller parameter values were established during the tuning process based on these properties.
The other important position-accuracy aspect is how the target position is reached. Again, the result depends on how the system was tuned and which aspect of accuracy was given the most weight during tuning, e.g. no position overshoot. Given the intended use of the device in high-precision microsurgery, the tuning was weighted accordingly, as discussed above.
CHAPTER 5 : CONCLUSION
In this chapter, the main conclusions are briefly discussed and recommendations
5.1 Conclusions
In this thesis work, it was first explained how robotic surgery can benefit ophthalmic surgery, and then different ophthalmic surgeries were studied in order to derive the system requirements.
This system features a 350 mm ring that can hold up to three different instrument
manipulators and provide a rotary motion about the optic axis of the eye, φ, a linear
motion in the radial direction that determines the entry point to the eye, r, the
double-rotor mechanism that provides a remote center of motion for the surgical
manipulator at the entry point to the eye with two lateral rotations about the entry
point to the eye, θ1 and θ2, a stroke motion along the instrument axis, Z, and a rotation about the instrument axis, ψ. This design is compact, offers a wide range of motion, and is capable of positioning the instrument with a precision on the order of 10 μm. The forward and inverse kinematics of the system were also derived.
Since the robotic surgery system discussed in this thesis uses 6 DC motors as the main elements of its control system, a linear model was developed to analyze their behavior.
To run 6 DOF system, six miniature DC motors with PID controllers were used. To
perform the motion control of the system, first the dynamic behavior of a DC motor
was studied and modeled and then the position and velocity transfer functions were
transfer functions, the control diagrams were plotted and used to study the
system and the process of finding the best proportional gain to give the optimum
tune the system, the same process was repeated to find the effect of integral and
derivative gains. Eventually, the PID control system with the optimum gains was
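A sketch of that position loop, using the motor parameters listed in Appendix C (J, b, K, R, L); the forward-Euler integration, the unit step, and the gain Kp = 1 are illustrative choices for this sketch, not the thesis's actual tuning procedure.

```python
def step_response(Kp, t_end=0.1, dt=1e-6):
    """Forward-Euler simulation of a proportional position loop around the
    DC motor model:  L di/dt = V - R i - K w ;  J dw/dt = K i - b w.
    Returns the motor angle after t_end seconds of a 1 rad step command."""
    J, b = 0.13e-7, 10e-8           # rotor inertia [kg m^2], friction [N m s/rad]
    K, R, L = 0.01, 27.4, 0.399e-3  # torque const. [N m/A], resistance [Ohm], inductance [H]
    theta_ref = 1.0                 # unit position step [rad]
    i = w = theta = 0.0
    for _ in range(int(t_end / dt)):
        V = Kp * (theta_ref - theta)       # proportional position feedback
        i += dt * (V - R * i - K * w) / L  # electrical dynamics
        w += dt * (K * i - b * w) / J      # mechanical dynamics
        theta += dt * w                    # integrate velocity to position
    return theta
```

Because the position plant contains an integrator, the loop settles on the target with no steady-state error even under pure proportional control; raising Kp trades rise time against overshoot, which is exactly the trade-off explored during tuning.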
The Controller Area Network (CAN) and the controller software were discussed next.
To build the network, each motor controller is configured as a node on the system
and must then be properly connected to the rest of the nodes on the network. The
method of node number assignment was also described.
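In CANopen, the identifier of every message combines a standard function code with the node-ID, so assigning unique node numbers is what makes the six controllers individually addressable on the shared bus. The sketch below uses the CANopen pre-defined connection set; the mapping of the six axes to node-IDs 1–6 is an assumption for illustration, not the thesis's actual assignment.

```python
# CANopen pre-defined connection set: COB-ID = function code + node-ID (1..127)
FUNCTION_CODES = {
    "SDO_RX":  0x600,  # host -> node service data (configuration requests)
    "SDO_TX":  0x580,  # node -> host service data (replies)
    "PDO1_TX": 0x180,  # node -> host process data (e.g. actual position)
    "PDO1_RX": 0x200,  # host -> node process data (e.g. target position)
}

def cob_id(function, node_id):
    """Default COB-ID for a given message type and node number."""
    if not 1 <= node_id <= 127:
        raise ValueError("CANopen node-IDs must be in 1..127")
    return FUNCTION_CODES[function] + node_id

# Hypothetical assignment of the six axes to node-IDs 1..6:
axes = ["phi", "r", "theta1", "theta2", "Z", "psi"]
nodes = {axis: i + 1 for i, axis in enumerate(axes)}
```

With this scheme, the master addresses each joint simply by its node number, e.g. a configuration request to node 3 goes out with identifier 0x603.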
EPOS Studio is the graphical user interface that provides the connection with the
controllers in order to run the motors. This program offers a wide range of control
options and serves as a well-defined master interface for this robotic system.
Motor positions are commanded in quadrature counts (qc), the smallest motion
increment achievable by this design, which guarantees the required precision for
the intended surgeries.
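For scale, a quadrature count corresponds to a quarter of an encoder line, so an encoder with N lines per revolution behind a gearhead yields 4·N·(gear ratio) counts per output revolution. The encoder resolution and gear ratio below are illustrative assumptions, not this robot's actual hardware:

```python
def qc_per_rev(encoder_lines, gear_ratio=1.0):
    """Quadrature counts per output revolution: 4 counts per encoder line."""
    return 4 * encoder_lines * gear_ratio

def qc_to_deg(qc, encoder_lines, gear_ratio=1.0):
    """Convert a position expressed in quadrature counts to output-shaft degrees."""
    return qc * 360.0 / qc_per_rev(encoder_lines, gear_ratio)

# Example (assumed values): a 512-line encoder with a 100:1 gearhead gives
# 204,800 qc per output revolution, i.e. well under 0.01 degree per count.
```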
The surgical robot that is the subject of this thesis is fully assembled and
motorized and ready to function at this point. The master control system offers
the functionality needed to operate it.
As mentioned earlier in the body of this thesis, this robot is the slave robot,
which performs the operation. The input to the robot should be generated by a master
robot. The main part of the future work is to design a master robot and the
communication method between these two robots. Although designing such a system is
a substantial project in its own right, the procedures in this work can be used as
a baseline for designing an efficient and reliable master console.
Since the forward and inverse kinematics of the slave robot are fully determined,
the mapping between the surgeon's movements on the master console and the slave
robot's joint motions can be readily established.
The control system that receives the input from the master console and translates
it into slave robot movement can be designed to scale down the surgeon's hand
motions, increasing precision beyond what the unaided hand can achieve.
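A minimal sketch of such scaling: each master-console displacement increment is multiplied by a factor well below one before being sent to the slave, and clamped so a sudden hand jerk cannot produce a large tool motion. The 0.1 factor and the clamp limit are illustrative assumptions, not values from this work.

```python
def scale_motion(master_delta_mm, scale=0.1, max_slave_step_mm=0.05):
    """Map a master-console hand displacement to a slave instrument step.

    scale < 1 shrinks hand motion for precision; the clamp bounds any single
    step so an abrupt hand movement cannot command a large tool excursion."""
    step = scale * master_delta_mm
    return max(-max_slave_step_mm, min(max_slave_step_mm, step))
```

A 0.2 mm hand movement then becomes a 0.02 mm tool movement, while a 5 mm jerk is limited to the 0.05 mm clamp in either direction.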
The next step would be design verification and validation based on the standards of
regulatory agencies such as the FDA. The results of the verification and validation
tests can then support regulatory approval of the device.
BIBLIOGRAPHY
Ang, W. T., C. N. Riviere, and P. K. Khosla. "An active hand-held instrument for
enhanced microsurgical accuracy." MICCAI, 2000: 878 – 886.
Babuska, Robert, and Stefano Stramigioli. "Matlab and Simulink for Modeling and
Control." 1999.
Butter, M., et al. Robotics for Healthcare, Final Report. TNO Informatie en
Communicatietechnologie, The Netherlands; Vilans, The Netherlands; EuroAct,
Japan; FhG ISI, Germany; VTT, Finland, 2008.
Craig, J. J. Introduction to Robotics: Mechanics and Control. 3rd ed. New Jersey: Pearson
Prentice Hall, 2004.
Dalvand, Mohsen Moradi, and Bijan Shirinzadeh. "Motion control analysis of a
parallel robot assisted minimally invasive surgery/microsurgery system
(PRAMiSS)." Robotics and Computer-Integrated Manufacturing 29, no. 2
(2013): 318-327.
Donelson, S. M., and C. C. Gordon. Matched Anthropometric Database for U.S. Marine
Corps Personnel: Summary and Statistics. Newton Centre, MA: Geo Centers Inc.,
1996.
Engel, D., J. Raczkowsky, and H. Worn. "A safe robot system for craniofacial surgery."
IEEE International Conference on Robotics and Automation (ICRA) 2 (2001): 2020-
2024.
Farsi, M., K. Ratcliff, and M. Barbosa. "An introduction to CANopen." Computing &
Control Engineering Journal, Volume 10, 1999.
Fei, B., W. Sing Ng, S. Chauhan, and C. K. Kwoh. "The safety issues of medical
robotics." Reliability Engineering & System Safety 73, no. 2 (2001): 183-192.
Ghani, Khurshid R., Quoc-Dien Trinh, Jesse Sammon, Wooju Jeong, Ali Dabaja, and
Mani Menon. "Robot-assisted urological surgery: Current status and future
perspectives." Arab Journal of Urology 10 (March 2012): 17-22.
Goroll, Allan H., and Albert G. Mulley. Primary Care Medicine: Office Evaluation and
Management of the Adult Patient. 6th. Philadelphia: Lippincott Williams &
Wilkins, 2012.
Gupta, P.K., P.S. Jensen, and E. de Juan. "Surgical forces and tactile perception during
retinal microsurgery." Medical Image Computing and Computer-Assisted
Intervention, 1999: 1218-1225.
Hendrix, Ron. Robotically assisted eye surgery: A haptic master console. Eindhoven:
Ipskamp Drukkers, TU/e, 2011.
Higson, G. R. Medical Device Safety: The Regulation of Medical Devices for Public
Health and Safety. 1st. London: The Institute of Physics (IoP), 2001.
Ida, Yoshiki, Naohiko Sugita, Takashi Ueta, Yasuhiro Tamaki, Keiji Tanimoto, and
Mamoru Mitsuishi. "Microsurgical robotic system for vitreoretinal surgery."
International Journal of Computer Assisted Radiology and Surgery 7, no. 1
(January 2012): 27-34.
Jagtap, A. D., and C. N. Riviere. "Applied force during vitreoretinal microsurgery with
handheld instruments." 26th Annual IEEE International Conference of the
Engineering in Medicine and Biology Society. 2004. 2771-2773.
Fall Meeting of the Biomedical Engineering Society BMES/EMBS Conference.
1999.
Kazanzides, Peter, and Paul Thienphrapa. "Centralized processing and distributed I/O
for robot control." Technologies for Practical Robot Applications. IEEE
International Conference, 2008. 84-88.
Kohanbash, David. "CAN bus (CANopen & CiA) for Motor Control." 2014.
Kumar, R., et al. "Preliminary experiments in cooperative human/robot force control
for robot assisted microsurgical manipulation." IEEE International Conference
on Robotics and Automation (ICRA) 1 (2000): 610-617.
Lynch, Patrick J. "Medical Illustrator." Lateral eye and orbit anatomy with nerves.
2006.
Maxon. CANopen Basic Information. Maxon Corporation, 2008.
Meenink, Thijs. Vitreo-retinal eye surgery robot: sustainable precision. PhD diss.
Eindhoven: Ipskamp Drukkers, 2011.
Meshram, P.M., and R.G. Kanojiya. "Tuning of PID controller using Ziegler-Nichols
method for speed control of DC motor." Advances in Engineering, Science and
Management (ICAESM), 2012.
ORBIS Cyber-Sight. Manual Small Incision Cataract Surgery. New York: ORBIS
International, 2006.
Pieper, D., and B. Roth. "The Kinematics of Manipulators Under Computer Control."
Proceedings of the Second International Congress on Theory of Machines and
Mechanisms 2 (1969): 159-169.
Pitcher, John D., Jason T. Wilson, Tsu-Chin Tsao, Steven D. Schwartz, and Jean-Pierre
Hubschman. "Robotic Eye Surgery: Past, Present, and Future." Journal of Computer
Science & Systems Biology, published online, 2012: doi:10.4172/jcsb.S3-001.
Shahinpoor, M., and S. Gheshmi. Robotic Surgery: Smart Materials, Robotic Structures,
and Artificial Muscles. New York and London: Taylor and Francis Publishers,
PAN Stanford Publishing, 2014.
Tameesh, M. K., R. R. Lakhanpal, et al. "Retinal vein cannulation with prolonged
infusion of tissue plasminogen activator (t-PA) for the treatment of
experimental retinal vein occlusion in dogs." American Journal of
Ophthalmology 138, no. 5 (2004): 829-839.
Taylor, R. H., et al. "Taming the bull: safety in a precise surgical robot." Fifth
International Conference on Advanced Robotics, ICAR, 1991: 865-870.
APPENDIX A: MATLAB CODE FOR FORWARD KINEMATICS
% Symbolic forward kinematics of the 6-DOF surgical robot
clear all; clc;
syms Phi Psi Th1 Th2 R Z H   % joint variables and geometric parameters
T10=[cos(Phi) sin(Phi) 0 0;
-sin(Phi) cos(Phi) 0 0;
0 0 1 H;
0 0 0 1]
T21=[1 0 0 0;
0 0 -1 R;
0 1 0 0;
0 0 0 1]
T32=[cos(Th1) -sin(Th1) 0 0;
-sin(Th1)*cosd(45) -cos(Th1)*cosd(45) cosd(45) -H;
-sin(Th1)*cosd(45) -cos(Th1)*cosd(45) -cosd(45) H;
0 0 0 1]
T43=[cos(Th2) -sin(Th2) 0 0;
sin(Th2)*cosd(22.5) cos(Th2)*cosd(22.5) -sind(22.5) 0;
sin(Th2)*sind(22.5) cos(Th2)*sind(22.5) cosd(22.5) 0;
0 0 0 1]
T54=[1 0 0 0;
0 cosd(22.5) -sind(22.5) -Z*sind(22.5);
0 sind(22.5) cosd(22.5) Z*cosd(22.5);
0 0 0 1]
T65=[cos(Psi) -sin(Psi) 0 0;
sin(Psi) cos(Psi) 0 0;
0 0 1 0;
0 0 0 1]
T60=T10*T21*T32*T43*T54*T65
function T60=F_Kine(Phi,Th1,Th2,Psi,r,Z)
% Numeric forward kinematics: returns the 4x4 homogeneous transform
% from the base frame to the instrument tip (angles in degrees).
Phi=deg2rad(Phi);
Th1=deg2rad(Th1);
Th2=deg2rad(Th2);
Psi=deg2rad(Psi);
H=1;
R=H+r;
T10=[cos(Phi) sin(Phi) 0 0;
-sin(Phi) cos(Phi) 0 0;
0 0 1 H;
0 0 0 1];
T21=[1 0 0 0;
0 0 -1 R;
0 1 0 0;
0 0 0 1];
T32=[cos(Th1) -sin(Th1) 0 0;
-sin(Th1)*cosd(45) -cos(Th1)*cosd(45) cosd(45) -H;
-sin(Th1)*cosd(45) -cos(Th1)*cosd(45) -cosd(45) H;
0 0 0 1];
T43=[cos(Th2) -sin(Th2) 0 0;
sin(Th2)*cosd(22.5) cos(Th2)*cosd(22.5) -sind(22.5) 0;
sin(Th2)*sind(22.5) cos(Th2)*sind(22.5) cosd(22.5) 0;
0 0 0 1];
T54=[1 0 0 0;
0 cosd(22.5) -sind(22.5) -Z*sind(22.5);
0 sind(22.5) cosd(22.5) Z*cosd(22.5);
0 0 0 1];
T65=[cos(Psi) -sin(Psi) 0 0;
sin(Psi) cos(Psi) 0 0;
0 0 1 0;
0 0 0 1];
T60=T10*T21*T32*T43*T54*T65;
end
APPENDIX B: MATLAB CODE FOR INVERSE KINEMATICS
% Inverse kinematics: recover the joint variables from the desired tip
% position (Px, Py, Pz) and the entries rij of the desired orientation;
% the rijp below are the corresponding entries after removing the base
% rotation Rz about the optic axis.
Phi=atan2(Px-Pz*r13/r33,Py-Pz*r23/r33);
R=sqrt((Px-Pz*r13/r33)^2+(Py-Pz*r23/r33)^2);
Z=Pz/r33;
Rz=[cos(Phi) -sin(Phi) 0 0;
sin(Phi) cos(Phi) 0 0;
0 0 1 0;
0 0 0 1];
Th2=2*asin(sqrt((r23p+r33p-1)/0.41421356));
A=(0.35355339+0.35355339*cos(Th2));
B=0.38268343*sin(Th2);
C=r13p;
E=(1-sin(Th2/2)^2);
F=0.2705980501*sin(Th2);
D=r23p-0.7071067812*sin(Th2/2)^2;
Sine_Th1=(2*B*D-B*E+E*C)/(2*F*B+A*E);
Cosine_Th1=C/B-A/B*Sine_Th1;
Th1=atan2(Sine_Th1,Cosine_Th1);
Cosine_Psi=((cos(Th1)*cos(Th2)-
0.92387953*sin(Th1)*sin(Th2))*r11p...
+(0.27059805*sin(Th2)+0.65328148*cos(Th1)*sin(Th2)+0.70710678*cos(Th2)*
sin(Th1))*r21p...
+(0.27059805*sin(Th2)-0.65328148*cos(Th1)*sin(Th2)-
0.70710678*cos(Th2)*sin(Th1))*r31p);
Sine_Psi=((0.14644661*sin(Th1)-0.92387953*cos(Th1)*sin(Th2)-
0.85355339*cos(Th2)*sin(Th1))*r11p...
+(2.4142136*sin(0.5*Th1)^2*sin(0.5*Th2)^2-1.0*sin(0.5*Th1)^2-
1.7071068*sin(0.5*Th2)^2-0.65328148*sin(Th1)*sin(Th2)+1.0)*r21p...
+(sin(0.5*Th1)^2-
2.4142136*sin(0.5*Th1)^2*sin(0.5*Th2)^2+0.70710678*sin(0.5*Th2)^2+0.653
28148*sin(Th1)*sin(Th2))*r31p);
Psi=atan2(Sine_Psi,Cosine_Psi);
Phi=Phi*180/pi;
Th1=Th1*180/pi;
Th2=Th2*180/pi;
Psi=Psi*180/pi;
disp(' Phi Th1 Th2 Psi r Z')
disp([ Phi Th1 Th2 Psi R Z])
end
APPENDIX C: MATLAB CODE TO PLOT RESPONSES OF THE DC MOTOR
SYSTEM
function PIDC(Kp,Ki,Kd,t)
% Plot closed-loop responses of the DC motor under P and PID control
J=0.13*10^-7; % kgm2 %
b=10*10^-8; % Nms/rad %
K=0.01; % Nm/A %
R=27.4; % Ohm %
L=0.399*10^-3; % H %
s = tf([1 0],1);
Gv = K/((L*s + R)*(J*s + b) + K^2);
Ga = Gv/s;
Gv.InputName = 'Voltage';
Gv.OutputName = 'Velocity';
Ga.InputName = 'Voltage';
Ga.OutputName = 'Angle';
% G = [Gv; Ga];
G = ss(Ga);
set(G,'c',[0 1 0; 0 0 1],'d',[0;0],'OutputName',{'Velocity';'Angle'});
% figure(1);
% subplot(2,1,1); step(Ga, 0.03);title('Step
Response','FontSize',10);ylabel('Position
(rad)','FontSize',10);xlabel('Time','FontSize',10);
% subplot(2,1,2); impulse(Ga, 0.03);title('Impulse
Response','FontSize',10);ylabel('Position
(rad)','FontSize',10);xlabel('Time','FontSize',10);
% set(gcf,'Color',[0.85,0.7,1]);
% set(gcf,'InvertHardcopy','off');
% print('-dmeta');
% figure(2);set(gcf,'Color',[0.85,0.7,1])
% subplot(2,1,1); step(Gv, 0.03);title('Step
Response','FontSize',10);ylabel('Velocity
(rad/s)','FontSize',10);xlabel('Time','FontSize',10);
% subplot(2,1,2); impulse(Gv, 0.03);title('Impulse
Response','FontSize',10);ylabel('Velocity
(rad/s)','FontSize',10);xlabel('Time','FontSize',10);
% set(gcf,'InvertHardcopy','off');
% print('-dpng');
% figure(3);
% subplot(2,1,1); bode(Ga);
% subplot(2,1,2); bode(Gv);
% set(gcf,'Color',[0.85,0.7,1]);
% set(gcf,'InvertHardcopy','off');
% print('-dpng');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% % % Proportional controller
% % % Position feedback
Gc = feedback(Ga*Kp,1);% Angle
Gc.InputName = 'Desired Angle';
% figure(5)
% rlocus(Gc)
% set(gcf,'Color',[0.85,0.7,1]);
% set(gcf,'InvertHardcopy','off');
% print('-dpng');
% figure(6);
% step(Gc,0:0.00001:t);title('Step
Response','FontSize',10);ylabel('Position
(rad)','FontSize',10);xlabel('Time','FontSize',10);
% set(gcf,'Color',[0.85,0.7,1]);
% set(gcf,'InvertHardcopy','off');
% print('-dpng');
% % % Velocity feedback
Gc = feedback(Gv*Kp,1);% Velocity
Gc.InputName = 'Desired velocity';
% figure(7); step(Gc,0:0.0005:t);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% PID controller
C = Kp + Ki/s + Kd*s; % controller transfer function built from the input gains
% Position feedback
Gc = feedback(Ga*C,1);% Angle
Gc.InputName = 'Desired Angle';
% Velocity feedback
Gc = feedback(Gv*C,1);% Velocity
Gc.InputName = 'Desired velocity';
% figure(9); step(Gc,0:0.0005:t);
end
BIOGRAPHY OF THE AUTHOR
Arezoo Ebrahimi was born in Booshehr Port, Iran on January 3rd, 1985. She
attended Bentolhoda Sadr high school in Shiraz. She then studied Industrial