
Literature Review

Rough Work 3
Prosthesis Software and Deep Machine Learning
 The MPL software systems consist of code running on the LC, SMCs, LMCs, and
FTSNs. The LC is the main processor of the limb and is responsible for receiving limb
control commands, running high-level control algorithms, and communicating with the
LMCs at 50 Hz, the SMCs at 200 Hz, and the FTSNs at 200 Hz. It also provides the
gateway for troubleshooting, diagnostics, and configuration of the limb system.
 The LMC software is responsible for providing closed-loop position, velocity, and torque
control of the motors in the upper arm and wrist joints of the MPL. The SMC software is
responsible for actuating the finger and thumb joints of the MPL hand by running closed-
loop position and velocity control of a BLDC motor.
 The SMC software is also responsible for interfacing with several contact sensors found
throughout the palm for tactile feedback. As mentioned previously, the FTSNs are found
on the fingertips of the hand and consist of a suite of sensors that obtain information from
the environment. The FTSN software samples the sensors at a rate of 400 Hz and reports
the data to the LC at a rate of 200 Hz for processing.
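 As a rough illustration of this multi-rate communication scheme, the sketch below drives hypothetical LMC, SMC, and FTSN handlers at the rates described above from a single polling loop; the handler names and the 400 Hz tick are assumptions for illustration only, not the MPL's actual bus code.
```python
import time

# Rates described above: LC <-> LMC at 50 Hz, SMC and FTSN at 200 Hz.
RATES_HZ = {"lmc": 50, "smc": 200, "ftsn": 200}

def run_controller(handlers, duration_s=1.0, tick_hz=400):
    """Call each handler at its own rate from a single fixed-rate tick."""
    next_due = {name: 0.0 for name in handlers}
    start = time.monotonic()
    while (now := time.monotonic() - start) < duration_s:
        for name, handler in handlers.items():
            if now >= next_due[name]:
                handler()
                next_due[name] += 1.0 / RATES_HZ[name]
        time.sleep(1.0 / tick_hz)

# Hypothetical handlers standing in for the real bus transactions.
run_controller({
    "lmc": lambda: None,   # upper arm / wrist joint controllers
    "smc": lambda: None,   # finger / thumb joint controllers
    "ftsn": lambda: None,  # fingertip sensor nodes
})
```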
 The signal analysis subsystem processes the incoming (live or prerecorded) signals,
extracts signal features, and then converts the signals into a command to the limb system,
as shown in Figs. 1 and 2b. Signal feature extraction is used by the pattern classification
algorithms to reduce the dimensionality of the input signal. The particular feature set and
classifier/regressor are chosen on the basis of the intended application and available
signal set. Algorithm parameters (weighting matrices, etc.) associated with these
algorithms are updated during a “training” phase during which processed input signals
are correlated to known movements.
 This process of training, or correlating input signals to intended movements, is an offline
process coordinated by the VIE that allows the algorithm parameters to be customized to
an individual limb user. In a myoelectric control scenario, the signal analysis subsystem
performs bandpass filtering of input signals, extracts time-domain features (e.g., mean
absolute value and number of zero crossings during a given time window), then uses a
linear classifier to select the most likely intended movement class, and finally issues
commands to the controls subsystem.
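 The time-domain features named above can be computed in a few lines. The sketch below (the simulated data, assumed 1 kHz sampling rate, and noise threshold are illustrative assumptions) computes the mean absolute value and zero-crossing count for a single analysis window.
```python
import numpy as np

def mean_absolute_value(window):
    """Mean absolute value of one analysis window."""
    return float(np.mean(np.abs(window)))

def zero_crossings(window, threshold=0.0):
    """Number of sign changes whose step size exceeds a small noise threshold."""
    sign_change = np.diff(np.sign(window)) != 0
    big_enough = np.abs(np.diff(window)) > threshold
    return int(np.sum(sign_change & big_enough))

# Example on a simulated 150 ms window of EMG sampled at an assumed 1 kHz.
rng = np.random.default_rng(0)
window = rng.normal(size=150)
features = [mean_absolute_value(window), zero_crossings(window, threshold=0.01)]
```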
 Each of these processing elements can be modularly selected depending on the specific
application. Examples are provided in the Applications section. The output of the signal
analysis block is an action command that represents the high-level intent of the end user
(e.g., flex elbow, open hand, or move to location). These position and/or velocity
commands are transferred using the common interface either to a controls block or
directly to physical limb hardware outside of the simulation environment.
 The controls subsystem of the VIE receives high-level commands representing the intent
of the neuroprosthetic end user (e.g., flex elbow to 90°) and translates these commands
into low-level actuator commands (e.g., supply current for motor commutation). These
commands can be used for hardware-in-the-loop simulation or for high-fidelity motor
simulations. Using the combination of desired commands from the end user and feedback
from the modeled system, the controls block regulates low-level actuator commands to
achieve the desired system performance. The intent bus from the signal analysis
subsystem contains command information (Fig. 2) including joint commands, Cartesian
space commands, and movement macros (e.g., hand grasp patterns). Grasp commands
use customizable trajectories for each finger of the prosthetic device, allowing high-level
coordination to accomplish predefined hand conformations.
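 To illustrate how a grasp macro could expand into coordinated per-finger trajectories, the sketch below interpolates each finger from an open posture to a grasp-specific target angle; the grasp names, target angles, and linear interpolation are illustrative assumptions, not the MPL's actual trajectories.
```python
import numpy as np

FINGERS = ["thumb", "index", "middle", "ring", "little"]

# Hypothetical target joint angles (degrees) for two grasp conformations.
GRASP_TARGETS = {
    "cylindrical": {f: 60.0 for f in FINGERS},
    "pinch": {"thumb": 45.0, "index": 50.0, "middle": 0.0, "ring": 0.0, "little": 0.0},
}

def grasp_trajectories(grasp, steps=50):
    """Per-finger position trajectory from open (0 deg) to the grasp target."""
    targets = GRASP_TARGETS[grasp]
    return {f: np.linspace(0.0, targets[f], steps) for f in FINGERS}

trajectories = grasp_trajectories("pinch")
```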
 The controls system modulates the impedance or apparent stiffness of selected joints,
allowing the limb to mimic the compliance of the natural limb. Modularity within the
controls subsystem provides the flexibility to evaluate the efficacy of a variety of control
modalities. This allows the end user to switch between a number of different control
strategies depending on the task at hand or whichever mode of operation the user prefers.
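 A minimal sketch of the impedance (apparent stiffness) modulation mentioned above, assuming a virtual spring-damper at a single joint with illustrative gains:
```python
def impedance_torque(theta_des, theta, theta_dot, stiffness=5.0, damping=0.2):
    """Joint torque command from a virtual spring-damper around the target angle."""
    return stiffness * (theta_des - theta) - damping * theta_dot

# Lowering `stiffness` makes the joint more compliant (easier to back-drive);
# raising it makes the joint track the commanded angle more rigidly.
```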
 During clinical research using the VIE, noninvasive myoelectric activity is acquired from
skin electrodes placed on the residual muscle of an amputee (Fig. 5a) [3, 9]. The signals are
fed to the VIE, which processes them by bandpass filtering the signals between 80 and
500 Hz and then extracts four time-domain features from the filtered signals for each
electrode (Fig. 5b). The four features are then classified into 14 movement classes (plus a
“no movement” class) using linear discriminant analysis [9]. These signals, in conjunction
with the joint velocity signal (Fig. 5, c and d), are then used to control a virtual arm (Fig.
5e), as well as an actual prosthetic limb developed as part of the RP2009 program. The
control signals produce highly accurate movements, and execution times for these
movements are comparable to those of normal subjects. Because of its modularity, the
VIE can perform signal processing at one location and operate hardware at remote
locations. This concept was established and tested using the VIE framework to control a
virtual prosthetic limb at one location via the Internet, driven by a neural decode algorithm
for a primate at a collaborator’s lab at a separate location.
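 A sketch of this decode stage is given below. The sampling rate is an assumption (it is not specified above), and only mean absolute value and zero crossings are named in the text; waveform length and slope sign changes are added here as typical choices for the remaining two features, with scikit-learn's linear discriminant analysis standing in for the classifier.
```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 2000.0  # assumed EMG sampling rate (Hz)

def td_features(window):
    """Four common time-domain features for one electrode channel."""
    diff = np.diff(window)
    return np.array([
        np.mean(np.abs(window)),                 # mean absolute value
        np.sum(np.abs(diff)),                    # waveform length
        np.sum(np.diff(np.sign(window)) != 0),   # zero crossings
        np.sum(np.diff(np.sign(diff)) != 0),     # slope sign changes
    ])

def extract(trial):
    """trial: (samples, electrodes) raw EMG -> one concatenated feature vector."""
    sos = butter(4, [80.0, 500.0], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, trial, axis=0)
    return np.hstack([td_features(filtered[:, ch]) for ch in range(trial.shape[1])])

# Training on recorded trials (X: list of trial arrays, y: class labels 0-14):
# clf = LinearDiscriminantAnalysis().fit(np.vstack([extract(t) for t in X]), y)
# intent = clf.predict(extract(new_trial).reshape(1, -1))
```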
 This is a form of deep learning, which uses both supervised and unsupervised learning and
is a subset of machine learning and AI. It uses artificial neural networks (ANNs) together
with representation learning. The ANN is inspired by the neural network of the human brain;
however, whereas the brain's network is dynamic (plastic) and analog, the ANN is static and
symbolic. It can learn, memorize, and generalize, and it was prompted by modeling of the
biological neural system. ANNs are particularly effective at solving problems related to
pattern recognition and matching, clustering, and classification.
 A standard ANN consists of three types of layers: an input layer, one or more hidden layers,
and an output layer; the output of one layer serves as the input to the next, as shown in the
simple network of Figure 3 [23]. If many hidden layers are present, the ANN is known as a
Deep Neural Network (DNN), which can be applied successfully to difficult problems. Deep
learning models can also yield results more quickly than standard machine learning
approaches. Activations propagate through the ANN from the input layer to the output layer,
and this forward propagation has a simple mathematical representation (sketched below). In
upper extremity prostheses, artificial intelligence is used for both direct and indirect
control from the neural network, drawing on various signals, sensors, controllers, and
algorithms.
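 Forward propagation can be written layer by layer as a_{l+1} = f(W_l a_l + b_l), where W_l and b_l are the layer's weights and biases and f is a nonlinearity. The sketch below assumes a small fully connected network with ReLU hidden units and a softmax output; the layer sizes and random weights are illustrative only.
```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

def forward(x, layers):
    """layers: list of (W, b) pairs ordered from input toward output."""
    a = x
    for W, b in layers[:-1]:
        a = relu(W @ a + b)          # hidden layers
    W, b = layers[-1]
    return softmax(W @ a + b)        # output layer: class probabilities

# Example: 8 EMG features -> 16 hidden units -> 5 movement classes
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(16, 8)), np.zeros(16)),
          (rng.normal(size=(5, 16)), np.zeros(5))]
probs = forward(rng.normal(size=8), layers)
```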
 The control signals for operating an upper extremity prosthesis come from the human in two
forms: electromyography (EMG) and electroencephalography (EEG). Prior attempts at voluntary
control of the elements of a prosthesis have focused on the use of EMG signals from muscle
groups that remain under voluntary control.
 A major advancement in EMG-controlled myoelectric prostheses was the use of an EMG
pattern-recognition-based control strategy [28]. This approach allows the user to control the
prosthesis with multiple degrees of freedom. The conventional electromyography (EMG)
technique uses bipolar surface electrodes placed over the muscle belly of the targeted group
of muscles. The electrodes are noninvasive, inexpensive, and readily incorporated into the
socket of the prosthesis. These surface electrodes have limitations, however: they cannot
record signals from several muscle groups at once; the signal magnitude and frequency are
inconsistent because of changes in the skin-electrode interface associated with physiological
and environmental variations; and the EMG signals may encounter noise and interference from
other tissues.
 To enhance the quality of the control signal, myoelectric control of a prosthesis or other
system utilizes the electrical action potentials of the residual limb’s muscles that are
emitted during muscular contractions. These emissions are measurable on the skin surface at
the microvolt level. They are picked up by one or two electrodes and processed by band-pass
filtering, rectification, and low-pass filtering to obtain the amplitude envelope of the EMG
signal, which is used as a control signal for the functional elements of the prosthesis.
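 A sketch of this conventional envelope pipeline, with assumed cutoff frequencies and sampling rate:
```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def emg_envelope(raw, fs=2000.0, band=(20.0, 450.0), lp_cut=6.0):
    """Band-pass filter, full-wave rectify, then low-pass to get the envelope."""
    bp = butter(4, band, btype="bandpass", fs=fs, output="sos")
    lp = butter(2, lp_cut, btype="lowpass", fs=fs, output="sos")
    rectified = np.abs(sosfiltfilt(bp, raw))   # full-wave rectification
    return sosfiltfilt(lp, rectified)          # smoothed amplitude envelope
```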
 Pattern recognition is the more advanced method that replaces the complicated mode
switching of the conventional EMG technique. This control approach is grounded in the
assumption that an EMG pattern contains information about the intended movements of a
residual limb. Using pattern classification techniques, a variety of different intended
movements can be identified from the distinguishing characteristics of the EMG patterns.
 Once a pattern has been classified, the movement is implemented through a command sent to
the prosthesis controller. EMG pattern-recognition-based prosthetic control involves EMG
measurement (to capture reliable and consistent myoelectric signals), feature extraction (to
retain the most important discriminating information from the EMG), classification (to
predict one of a set of intended movements), and multifunctional prosthesis control (to
implement the operation of the prosthesis according to the predicted movement class).
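 These four stages can be chained into a single decode step. In the sketch below, the class-to-command mapping is a placeholder, and the feature extractor and classifier are assumed to come from a pipeline such as the one sketched earlier.
```python
# Hypothetical mapping from predicted class index to a high-level command.
CLASS_TO_COMMAND = {
    0: "no_movement",
    1: "hand_open",
    2: "hand_close",
    3: "wrist_flex",
    4: "wrist_extend",
}

def decode_and_command(window, extract_features, classifier):
    """One EMG analysis window -> high-level command for the prosthesis controller."""
    features = extract_features(window).reshape(1, -1)
    predicted = int(classifier.predict(features)[0])
    return CLASS_TO_COMMAND.get(predicted, "no_movement")
```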
