You are on page 1of 63

GYM EXERCISE BODY POSTURE

DETECTION

DISSERTATION

Submitted in partial fulfilment of the


Requirements for the award of the degree

of
Bachelor of Technology
in
Information Technology
By:
GOVIND GUPTA (10613203119)
NITESH CHAND (00713207720)
SURAJ KUMAR (01113207720)
SANDEEP KUMAR (00913207720)

Under the guidance of:


Ms. Meenakshi Sihag

Department of Information Technology


Guru Tegh Bahadur Institute of Technology

Guru Gobind Singh Indraprastha University


Dwarka, New Delhi
Year 2019 – 2023
DECLARATION

We, hereby declare that this dissertation entitled "Gym Exercise Body Posture
Detection" in the partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Information Technology, Guru Tegh Bahadur Institute of
Technology, Guru Gobind Singh Indraprastha University, New Delhi is a result of our
own work and has not been submitted for any other degree at any other university or
institution. Any ideas or materials used from external sources have been fully
acknowledged and referenced in accordance with academic conventions.

We also declare that the project has been tested and implemented successfully and the
results obtained are true and accurate to the best of our knowledge.

The project was carried out under the guidance of Ms. Meenakshi Sihag.

Date:

GOVIND GUPTA (10613203119)

NITESH CHAND (00713207720)

SURAJ KUMAR (01113207720)

SANDEEP KUMAR (00913207720)

[ii]
CERTIFICATE

This is to certify that the dissertation entitled "Gym Exercise Body Posture Detection"
which is a bona-fide work submitted by GOVIND GUPTA (10613203119), NITESH
CHAND (00713207720), SURAJ KUMAR (01113207720) and SANDEEP KUMAR
(00913207720) in partial fulfillment of the requirement for the award of the degree of
Bachelor of Technology in Information Technology, Guru Tegh Bahadur Institute of
Technology, New Delhi is an authentic record of the candidate’s own work carried out
by them under our guidance. The matter embodied in this thesis is original and has not
been submitted for the award of any other degree.

Ms. Meenakshi Sihag Mr. P.S. Bedi


(Project Mentor) (Head of IT Department)

Date:

[iii]
ACKNOWLEDGEMENT

We would like to extend our deepest appreciation to our project mentor and supervisor,
Ms. Meenakshi Sihag, for her unwavering support, guidance and encouragement
throughout the development of this project. Her expertise and knowledge in the field
of computer vision and machine learning have been invaluable in shaping the direction
of this project and in helping us to overcome the various challenges we encountered.

We would also like to express our gratitude to all others at Guru Tegh Bahadur Institute
of Technology, for their contributions and support, without which this project would
not have been possible.

We would also like to extend our thanks to the open-source community for providing
the resources and tools that have been used in this project.

Finally, we would like to acknowledge the hard work and dedication of our team
members, who have contributed their time, skills and expertise to make this project a
success.

Date:

GOVIND GUPTA (10613203119)


guptagovind1410@gmail.com
NITESH CHAND (00713207720)
niteshchand478@gmail.com
SURAJ KUMAR (01113207720)
kumarsuraj345678@gmail.com
SANDEEP KUMAR (00913207720)
sandeepkumarrrr99@gmail.com

[iv]
ABSTRACT

The main objective of this project is to develop a machine learning model that can
accurately detect body posture during gym exercises. This is important for gym-goers
as proper body posture is crucial for the effective and safe execution of exercises, and
can help prevent injuries. Our model uses a combination of computer vision techniques
and deep learning algorithms to analyze real-time video footage of exercises and
identify any deviations from the correct posture. The model is trained on a dataset of
labelled images and videos of people performing exercises, and is tested on a separate
set of unseen data. Our results demonstrate that the model is able to achieve high
accuracy in detecting body posture, with an overall accuracy of [insert specific
accuracy percentage]. The model has the potential to be integrated into fitness apps or
used by personal trainers to provide real-time feedback to clients during exercises.

Fitness is important in people’s lives. Good fitness habits can improve


cardiopulmonary capacity, increase concentration, prevent obesity, and effectively
reduce the risk of death. Home fitness does not require large equipment but uses
dumbbells, yoga mats, and horizontal bars to complete fitness exercises and can
effectively avoid contact with people, so it is deeply loved by people. People who work
out at home use social media to obtain fitness knowledge, but learning ability is
limited. Incomplete fitness is likely to lead to injury, and a cheap, timely, and accurate
fitness detection system can reduce the risk of fitness injuries and can effectively
improve people’s fitness awareness. In the past, many studies have engaged in the
detection of fitness movements, among which the detection of fitness movements
based on wearable devices, body nodes, and image deep learning has achieved better
performance. However, a wearable device cannot detect a variety of fitness
movements, may hinder the exercise of the fitness user, and has a high cost. Both body-
node-based and image-deep-learning-based methods have lower costs, but each has
some drawbacks. The Gym Exercise Body Posture Detection machine learning project
has successfully developed a system that can accurately identify and correct incorrect
body postures during exercises in a gym setting. This system has the potential to
significantly reduce the risk of injuries among gym-goers and can be integrated into
existing gym equipment to provide real-time feedback to users.

[v]
Table of Contents

Title Page…………………………………………………………………………………………………………………………….i

Declaration………………………………………………………………………………………………………………………….ii

Certificate……………………………………………………………………………………………………………………………iii

Acknowledgement………………………………………………………………………………………………………………iv

Abstract……………………………………………………………………………………………………………………………….v

1 INTRODUCTION ................................................................................................................ 1
1.1 Problem Statement .................................................................................................. 2
1.2 Background .............................................................................................................. 3
1.3 Motivation ................................................................................................................ 4
1.4 Future Work ............................................................................................................. 5
2 REQUIREMENT ANALYSIS ................................................................................................. 6
2.1 Overview .................................................................................................................. 6
2.2 System Requirements .............................................................................................. 6
2.3 User Requirements .................................................................................................. 6
2.4 Hardware Requirements .......................................................................................... 6
2.5 Software Requirements ........................................................................................... 6
2.6 Machine Learning Model ......................................................................................... 7
2.7 Video Processing ...................................................................................................... 7
2.8 User Interface........................................................................................................... 7
2.9 Deployment.............................................................................................................. 7
2.10 Future Development ................................................................................................ 7
3 LITERATURE REVIEW ........................................................................................................ 8
3.1 Overview .................................................................................................................. 8
3.2 Existing Research...................................................................................................... 9
3.3 Current State of Detection ..................................................................................... 11
4 METHODOLOGY ............................................................................................................. 12
4.1 Overview ................................................................................................................ 12
4.2 Outline of Methodology ......................................................................................... 12
4.3 Relationship Diagram ............................................................................................. 13
4.4 Gym Exercises Used ............................................................................................... 14
4.4.1 Bicep Curl ....................................................................................................... 14
4.4.2 Front Raise ..................................................................................................... 14
4.4.3 Shoulder Press ................................................................................................ 14

[vi]
4.4.4 Front Squat ..................................................................................................... 15
4.5 Techniques and Tools Overview ............................................................................ 15
4.6 Techniques and Tools Used.................................................................................... 16
4.6.1 Python 3 ......................................................................................................... 16
4.6.2 Scikit-Learn ..................................................................................................... 17
4.6.3 OpenCV .......................................................................................................... 18
4.6.4 BlazePose ....................................................................................................... 19
4.6.5 Jupyter Notebook ........................................................................................... 21
4.6.6 Anaconda ....................................................................................................... 22
4.6.7 Anaconda Prompt .......................................................................................... 22
4.6.8 MediaPipe ...................................................................................................... 23
5 TEST CASES ..................................................................................................................... 24
6 RESULTS.......................................................................................................................... 26
6.1 Overview ................................................................................................................ 26
6.2 Implementation Procedure .................................................................................... 26
7 SUMMARY ...................................................................................................................... 30
8 CONCLUSION .................................................................................................................. 31
9 REFERENCES ................................................................................................................... 32
10 APPENDICES ............................................................................................................... 33
10.1 APPENDIX A (Source Code) .................................................................................... 33
10.2 APPENDIX B (Screenshots) ..................................................................................... 42

[vii]
LIST OF FIGURES

Figure 1. Relationship Diagram ................................................................................. 13


Figure 2. Python ......................................................................................................... 16
Figure 3. Scikit-Learn ................................................................................................ 18
Figure 4. OpenCV ...................................................................................................... 19
Figure 5. BlazePose 33 keypoint topology as COCO (colored with green) superset 20
Figure 6. Jupyter Notebook ....................................................................................... 21
Figure 7. Anaconda .................................................................................................... 22
Figure 8. MediaPipe ................................................................................................... 23
Figure 9. Curl Counter ............................................................................................... 42
Figure 10. Front Raise ............................................................................................... 43
Figure 11. Shoulder Press .......................................................................................... 44
Figure 12. Front Squat ............................................................................................... 45

LIST OF TABLES

Table 1. Software Testing and Results ...................................................................... 25

[viii]
Chapter 1

INTRODUCTION
1 INTRODUCTION
Fitness can bring many benefits to the body. With the rise in health awareness, men,
women, and children have gradually begun to engage in fitness activities. There are
many benefits of fitness exercise; it can effectively improve cardiopulmonary capacity,
increase concentration, maintain weight, etc. Most of those of exercise hope that their
posture can be improved, and improving posture can effectively reduce the risk of
obesity. Obese bodies are prone to many chronic diseases, and each is more likely to
lead to death, so regular exercise is important.

With the prevalence of COVID-19, people spend less time outdoors, which reduces
the amount of people’s physical activity. The gym industry, in particular, has been
considerably affected, resulting in people being unable to go to the gym to exercise.
These athletes then turn to home fitness, which can effectively help them avoid contact
with people and effectively reduce the impact of the epidemic. In addition, home
fitness does not require large fitness equipment but completes fitness exercises through
dumbbells, yoga mats, horizontal bars, and other equipment, so it is deeply loved by
people. However, people who build their bodies at home usually do not hire fitness
trainers but learn fitness-related information from social media and mobile apps.
Generally, most athlete are novices and have not received professional fitness exercise
guidance, so there is a risk of injury when exercising. Common fitness injuries are
usually caused by incorrect posture, heavy equipment, and excessive speed. This type
of sports injury is not easy to avoid by obtaining fitness knowledge only through social
media. Therefore, a cheap, simple, and accurate fitness movement recognition system
is important, which can effectively and instantly detect fitness movements, reduce
sports injuries, and improve people’s fitness awareness.

Among them, some systems use wearable devices to detect changes in human body
temperature and movement, which, in addition to detecting fitness movements, can
also perform preliminary detection of symptoms, such as COVID-19. This method lets
the fitness user put on the electronic device, and calculates the three-axis changes of
the electronic device when the fitness user is exercising. Then, these data are collected
and analyzed using machine learning to classify fitness movements. However, this
detection method has some shortcomings. When there are many types of fitness
movements, it is difficult to achieve accurate detection. When the body used for the

[1]
fitness movement is different from the part where the electronic device is worn, it is
more difficult to identify the current fitness movement. If the electronic device is
carried all over the body, the fitness user will be troubled when exercising, and the
cost will be relatively high. Another method is to detect fitness movements based on
computer vision, which has lower cost and does not hinder the exercise of fitness users
through the detection method of computer vision. The method of detecting fitness
movements based on computer vision is further divided into methods based on body
nodes and methods based on image deep learning. Body-node-based methods detect
fitness movements by calculating body nodes, which can be performed using
BlazePose, Mediapipe, Simple Baselines, etc. Using these methods, nodes of the body
and fitness movements can be detected through changes in the coordinates of the
nodes. In addition to detecting the speed of fitness movements, these methods can also
classify the current fitness movement type or the error between fitness movements and
standard movements.

1.1 Problem Statement


Pose estimation is a very useful technique in computer vision. Until the construction
of datasets such as COCO key points challenge, MPII human pose estimation, and
VGG pose dataset, there was little to no improvement in the field. The study conducted
for convolutional pose machines and 2D multi-person pose was able to successfully
identify and estimate human body parts within a video. The model for 2D multi-person
pose was named as BlazePose. The model has two branches, the first branch predicts
body part confidence maps, and the second branch predicts body part associations. The
original image actually runs through a VGG-19 model for the first 10 layers in order
to generate feature maps.

The part affinity field vectors provide opportunities to calculate angles between body
parts. We can now measure the biomechanics of the human in an exercise or training
video, and provide feedback on their routine. For example, if a user is attempting to
perform a squat, we can locate the part affinity field vectors of the legs, and calculate
the angles to check whether the user is applying the correct motion. This would be
beneficial for individuals who do not have time to visit a personal trainer at the gym,
or for an injured individual to go to a rehab centre just to perform movements to restore
the natural functionality of their injured body part. The feedback can be provided using

[2]
the OpenCV framework, which consists of a library of programming function for
computer vision. The library has support in C++, Python, and Java.

A major disadvantage in pose estimation is generating labels for the datasets. It can be
a very tedious process as manual work is required to identify and label the key points
within an image. This is also error prone since a missed tag or incorrect label can cause
undesired results in the training process. Therefore, an alternative model which
identifies the action-based pose once a silhouette has been extracted can alleviate the
issue of rigorous manual labelling. Therefore, the new approach will be specific to
workouts and allow for easier data organization and labelling which can benefit the
accuracy and dependability of the model identifying the workout pose. For example,
if an individual would like to track their pull up form, we can train a model to learn
the form required in a pull-up, and test the model in real time using the OpenCV
library.

This would require an additional step to train each exercise specific model, and it
would require labelled datasets. One such dataset which would help is the Penn Action
dataset which includes frames captured from YouTube videos, and there are videos
which are performing specified workouts such as pull ups or squats. The model can be
trained in Python using the Keras deep learning library, which runs on top of
TensorFlow.

In this project, the goal is to find the most effective model which would easily and
accurately predict a specified workout. The pose estimation approach uses a pre-
trained model, which will use geometric analysis to identify the workout performed
within a video, which can be in real-time. The silhouette extracted action recognition
model will be trained on datasets which contain labelled workout images. Background
subtraction will be applied to the images reducing noise prior to training. Once the
model is trained, it will be integrated with the webcam to perform real-time and pre-
recorded video frame-based analysis.

1.2 Background
The background of gym exercise body posture detection using machine learning,
OpenCV, and MediaPipe is rooted in the importance of proper body posture during
exercise for both injury prevention and optimal results. Poor posture can lead to muscle
imbalances, joint pain, and increased risk of injury.

[3]
Machine learning and computer vision technologies have been increasingly used in
recent years to improve various aspects of healthcare and fitness, such as image and
video analysis for medical diagnosis, and monitoring of physical activity.

OpenCV is an open-source computer vision library that provides a wide range of tools
for image and video processing. It is widely used in various applications, including
object detection, facial recognition, and motion analysis. MediaPipe is a framework
for building cross-platform, multimedia processing pipelines that can run on a variety
of devices, such as smartphones and laptops.

By using machine learning, OpenCV, and MediaPipe together, this project aims to
develop a system that can automatically analyze video data of a person performing
exercises and provide real-time feedback on the body posture, this way users can
perform the exercises correctly and avoid injuries.

1.3 Motivation
The motivation for the Gym Exercise Body Posture Detection project using machine
learning, OpenCV, and MediaPipe is to provide individuals with a tool to improve
their exercise form and reduce the risk of injury. Proper body posture during exercise
is crucial for avoiding injuries and achieving optimal results. However, it can be
difficult for individuals to know if they are maintaining the correct posture, especially
when performing exercises alone or without a personal trainer.

Machine learning and computer vision technologies have advanced significantly in


recent years, making it possible to develop systems that can automatically analyze
video data and provide real-time feedback on body posture. By using these
technologies, this project aims to create a system that can recognize various exercises,
such as squats and deadlifts, and provide feedback on the user's form, such as whether
their back is too rounded or if they are leaning too far forward.

Furthermore, it is also a motivation to improve the effectiveness and safety of gym


exercises by providing personalized feedback to the user based on their posture, it can
also serve as a useful tool for personal trainers and fitness enthusiasts to improve their
form and reduce the risk of injury.

[4]
1.4 Future Work
In future, we can improve the system by incorporating more advanced machine
learning models, such as deep learning, to improve the accuracy and reliability of the
system. We can also explore the possibility of integrating the system with wearable
devices to provide more personalized feedback to the users. Additionally, we can test
the system with a larger group of participants and in different environments to validate
the system's performance.

[5]
Chapter 2

REQUIREMENT ANALYSIS
2 REQUIREMENT ANALYSIS
2.1 Overview
In this chapter, system and user requirements will be covered including hardware and
software requirements.

2.2 System Requirements


1. Camera for capturing exercise video.
2. Python 3
3. Jupyter Notebook
4. Anaconda Prompt (any version)
5. OpenCV 4.1.0 (installed in Anaconda environment)
6. BlazePose
7. Scikit-Learn 0.22.2 (installed in Anaconda environment)
8. Mediapipe

2.3 User Requirements


• The live video can only contain 1 person.
• The video must record from correct perspective according to different workout.
• Avoid complex background, if possible, to maximise accuracy.
• Whole body of user is recommended to be fully captured in the video.
• At least upper body of user is captured in the video.
• Capture the video in bright background which the visibility of user is clear.

2.4 Hardware Requirements


A device with a camera such as smartphone, laptop or tablet is required to capture the
video data of the user performing the exercises. A computer with at least 8GB of RAM
and a fast CPU for running the machine learning algorithms and OpenCV.

2.5 Software Requirements


The system will require the installation of Anaconda environment and Jupyter. The
system will also require a Python runtime environment.

[6]
2.6 Machine Learning Model
The system will require a machine learning model that can recognize and classify
different gym exercises and body postures. The model should be BlazePose and
Mediapipe.

2.7 Video Processing


The system will use OpenCV to capture and process the video data of the user
performing the exercises. MediaPipe will be used to apply the trained model to the
video data and output the results in real-time.

2.8 User Interface


The system should have an easy-to-use interface that allows the user to start and stop
the video recording, view the feedback, and adjust settings as needed.

2.9 Deployment
The system should be deployed on a device, such as a computer or smartphone, and
made available for use by personal trainers and fitness enthusiasts.

2.10 Future Development


The system should be designed in a way that allows for future updates and expansion,
such as the addition of more exercises or the incorporation of more advanced machine
learning techniques.

[7]
Chapter 3

LITERATURE REVIEW
3 LITERATURE REVIEW
3.1 Overview
This chapter would summarize the existing research on body posture detection and
related areas, such as machine learning and computer vision. It would also provide a
summary of the current state of the art in the field.

A literature review for a Gym Exercise Body Posture Detection project would involve
researching and summarizing the existing research in the areas of machine learning,
computer vision, and posture detection. This would involve looking at studies that have
used similar techniques and technologies to develop systems for detecting body
postures in images and videos of people performing exercises.

In terms of machine learning, studies have explored the use of various algorithms such
as convolutional neural networks (CNNs), support vector machines (SVMs), and
Random Forest to classify body postures in images and videos. These studies have
found that CNNs are well-suited for this task due to their ability to learn features from
images and their high classification accuracy. Some studies have also used deep
learning techniques such as transfer learning and fine-tuning to improve the
performance of the model.

In terms of computer vision, research has focused on the use of techniques such as
image processing, feature extraction, and object detection to analyze posture in images
and videos. These techniques have been used to detect keypoints on the body, such as
joints, and track the movement of these keypoints over time to analyze posture. Some
studies have also used optical flow to track the movement of body parts.

In terms of posture detection, studies have focused on the detection of specific


postures, such as those used in exercises. These studies have used a combination of
machine learning and computer vision techniques to classify different postures, such
as the correct and incorrect postures for a given exercise. Some of these studies have
also used wearable sensors, such as accelerometers, to track body movements and
provide additional data for posture analysis.

The literature review would also highlight the current state-of-the-art in the field, the
challenges and limitations of the existing systems, and the potential directions for
future research.

[8]
It's important to note that the field is constantly evolving and new research is being
published all the time, so it would be important to consult recent literature and papers
to have the most up-to-date information.

3.2 Existing Research


Existing research on body posture detection and related areas such as machine learning
and computer vision has focused on developing systems and algorithms that can
accurately detect and classify body postures in images and videos.

In terms of machine learning, many studies have used algorithms such as convolutional
neural networks (CNNs), support vector machines (SVMs), and Random Forest. These
algorithms are trained on large datasets of images and videos of people in different
postures and are able to learn patterns and features that are indicative of specific
postures. Studies have found that CNNs are particularly effective for this task due to
their ability to learn features from images and their high classification accuracy. Some
studies have also used deep learning techniques such as transfer learning and fine-
tuning to improve the performance of the model.

In terms of computer vision, research has focused on the use of techniques such as
image processing, feature extraction, and object detection to analyze posture in images
and videos. These techniques involve detecting keypoints on the body, such as joints,
and tracking the movement of these keypoints over time to analyze posture. Some
studies have also used optical flow to track the movement of body parts.

Wearable sensors such as accelerometers have also been used in some studies to
provide additional data for posture analysis. These sensors can be placed on different
parts of the body to provide detailed information about body movements and posture.

Additionally, there has been research on multi-modal approach, where multiple


modalities like depth cameras, RGB cameras, and inertial sensors are used to achieve
better posture detection performance and improve robustness of the system.

There has also been research on developing systems that can provide real-time
feedback to users to help them correct their posture. These systems can use machine
learning algorithms to analyze posture and provide feedback through visual or auditory
cues.

[9]
There is a significant amount of existing research on body posture detection and related
areas, such as machine learning and computer vision. Some key areas of research
include:

1. Machine learning: Many studies have used machine learning algorithms, such
as convolutional neural networks (CNNs), support vector machines (SVMs),
and Random Forest to classify body postures in images and videos. These
studies have found that CNNs are well-suited for this task due to their ability
to learn features from images and their high classification accuracy. Some
studies have also used deep learning techniques such as transfer learning and
fine-tuning to improve the performance of the model.
2. Computer vision: Research in this area has focused on the use of techniques
such as image processing, feature extraction, and object detection to analyze
posture in images and videos. These techniques have been used to detect
keypoints on the body, such as joints, and track the movement of these
keypoints over time to analyze posture. Some studies have also used optical
flow to track the movement of body parts.
3. Wearable sensors: Some studies have used wearable sensors, such as
accelerometers, to track body movements and provide additional data for
posture analysis. These sensors can be placed on different parts of the body to
provide detailed information about body movements and posture.
4. Multi-modal approach: Some studies have used a combination of multiple
modalities like depth cameras, RGB cameras, and inertial sensors to achieve
better posture detection performance and improve robustness of the system.
5. Posture correction: Some studies have focused on developing systems that
can provide real-time feedback to users to help them correct their posture.
These systems can use machine learning algorithms to analyze posture and
provide feedback through visual or auditory cues.
6. Real-time posture detection: Some studies have focused on developing
systems that can detect posture in real-time, which can be used in a variety of
applications such as rehabilitation, sports training, and fitness.

Finally, some studies have focused on developing systems that can detect posture in
real-time, which can be used in a variety of applications such as rehabilitation, sports
training, and fitness.

[10]
It's important to note that the field is constantly evolving and new research is being
published all the time, so it would be important to consult recent literature and papers
to have the most up-to-date information.

3.3 Current State of Detection


The current state of body posture detection is an active area of research, with many
studies and systems being developed to improve the accuracy and real-time
performance of posture detection.

One of the key trends in the field is the use of deep learning techniques, such as
convolutional neural networks (CNNs), to classify body postures in images and videos.
These networks are able to learn features from images and achieve high classification
accuracy. Some studies have also used transfer learning and fine-tuning to improve the
performance of the model.

Another trend is the use of multiple modalities to improve posture detection


performance, such as using both RGB cameras and depth cameras, or RGB cameras
and inertial sensors. This multi-modal approach helps to achieve more robust and
accurate posture detection.

Real-time posture detection is also an active area of research. These systems are
designed to detect posture in real-time, allowing for real-time feedback to be provided
to the user. This can be useful for a variety of applications such as rehabilitation, sports
training, and fitness.

In summary, the current state of body posture detection is characterized by the use of
machine learning, computer vision, and wearable sensors to analyze posture in images
and videos, with a focus on real-time detection and improving performance using
multi-modal approaches.

[11]
Chapter 4

METHODOLOGY
4 METHODOLOGY
4.1 Overview
This section would detail the specific techniques and tools used in the project, such as
OpenCV and MediaPipe, as well as the dataset used for training and testing the model.
It would also describe the process used to collect and pre-process the data, as well as
the specific machine learning algorithms and architectures used.

4.2 Outline of Methodology


The methodology of a Gym Exercise Body Posture Detection would involve several
key steps, including data collection, pre-processing, model training, and integration
into a real-time pipeline. Below is a general outline of the methodology that could be
used:

1. Data collection: A dataset of images and videos of people performing different


exercises would be collected. The dataset would be labelled with the correct
posture for each exercise.
2. Pre-processing: The images and videos would be pre-processed using
OpenCV to ensure they are suitable for training a machine learning model. This
can include cropping, resizing, and normalizing the images and videos.
3. Model training: A machine learning model, such as a convolutional neural
network (CNN), would be trained on the dataset to classify different body
postures for different exercises. This would involve splitting the dataset into
training and testing sets and using the training set to train the model. The
model's performance would be evaluated using the testing set.
4. Integration with MediaPipe: The trained model would be integrated into a
pipeline using MediaPipe, which allows for real-time processing of video
streams from cameras or other sources. The pipeline would use the trained
model to detect body postures in real-time and provide feedback to the user to
correct their posture if necessary.
5. Evaluation: The system would be evaluated using a set of test images and
videos to evaluate the accuracy of the system.

Developing this kind of system would require a significant amount of expertise in


machine learning, computer vision, and software development.

[12]
4.3 Relationship Diagram

Figure 1. Relationship Diagram


Entities:
1. User: represents the person who is performing the exercises.
2. Exercise: represents the different types of exercises that the system can detect
and analyze.
3. Video: represents the video of the user performing the exercise, which serves
as the input for the system.
4. Frame: represents a single frame from the video, which is used for posture
detection and analysis.
5. Body part: represents the different parts of the body that the system can detect
and analyze, such as the head, shoulders, hips, and legs.
6. Posture: represents the different postures that the system can detect and
analyze, such as the correct and incorrect postures for each exercise.

Relationships:
1. User performs Exercise: represents the relationship between the user and the
exercise they are performing.
2. Video contains Frame: represents the relationship between the video and the
individual frames that make up the video.
3. Frame contains Body part: represents the relationship between a frame and the
body parts that are present in that frame.
4. Exercise has Posture: represents the relationship between an exercise and the
postures that are associated with that exercise.
5. Body part is part of Posture: represents the relationship between a body part
and the posture it is associated with.

[13]
4.4 Gym Exercises Used
4.4.1 Bicep Curl
The biceps curl exercise is a weightlifting exercise that targets the biceps muscle in the
upper arm. The exercise is typically performed with a barbell or dumbbells, and
involves lifting the weight from a hanging position to a contracted position at the
shoulders. The biceps curl exercise can be performed in a variety of different ways,
including standing, seated, or with an incline or decline bench. Proper form is
important to avoid injury and maximize muscle activation. It's a good idea to start with
a light weight and increase the weight as you become stronger.

4.4.2 Front Raise


The front raise exercise is a weightlifting exercise that targets the anterior deltoid
muscle in the shoulder. The exercise is typically performed with dumbbells or a
barbell, and involves lifting the weight from a hanging position in front of the body to
a contracted position at shoulder level. The front raise exercise can be performed in a
variety of different ways, including standing, seated, or with an incline or decline
bench. Proper form is important to avoid injury and maximize muscle activation. It's a
good idea to start with a light weight and increase the weight as you become stronger.
The front raise exercise works on the front part of your shoulders, helps to improve
shoulder stability and mobility.

4.4.3 Shoulder Press


The shoulder press exercise, also known as the military press or overhead press, is a
weightlifting exercise that targets the deltoids muscle in the shoulders, as well as the
triceps and the upper chest. The exercise is typically performed with a barbell or
dumbbells, and involves lifting the weight from a racked position at the shoulders to a
fully extended position overhead. The shoulder press exercise can be performed in a
variety of different ways, including standing, seated, or using a Smith machine. Proper
form is important to avoid injury and maximize muscle activation. It's a good idea to
start with a light weight and increase the weight as you become stronger. The shoulder
press is a compound exercise that works multiple muscle groups at once, which makes
it a great exercise for overall upper body strength.

[14]
4.4.4 Front Squat
The front squat exercise is a weightlifting exercise that targets the quadriceps, glutes,
and core muscles. The exercise is performed with a barbell and requires the lifter to
hold the barbell in a front rack position across the front of the shoulders and collarbone,
with the elbows pointing forward. The lifter then squats down by bending at the hips
and knees, lowering the body until the thighs are parallel to the ground or lower. The
lifter then stands back up to the starting position.

The front squat is a compound exercise that works multiple muscle groups at once. It
is a great exercise for overall lower body strength and can also help to improve core
stability and balance. Proper form is important to avoid injury and maximize muscle
activation. It's a good idea to start with a light weight and increase the weight as you
become stronger. It's also important to keep your chest up and core tight throughout
the movement, and to maintain a good position of your barbell on your shoulders.

4.5 Techniques and Tools Overview


The techniques and tools used in body posture detection would likely include the
following:

1. Machine learning: Algorithms such as convolutional neural networks


(CNNs), support vector machines (SVMs) and Random Forest would be used
to classify body postures in images and videos. These algorithms would be
trained on a dataset of labelled images and videos of people performing
exercises.
2. Computer vision: Techniques such as image processing, feature extraction,
and object detection would be used to analyze posture in images and videos.
These techniques would be used to detect keypoints on the body, such as joints,
and track the movement of these keypoints over time to analyze posture.
3. OpenCV: OpenCV is a library of computer vision functions that would be
used to pre-process the images and videos, such as cropping and resizing, to
ensure they are suitable for training a machine learning model.
4. MediaPipe: MediaPipe is a framework that allows for real-time processing of
video streams from cameras or other sources. The trained model would be
integrated into a pipeline using MediaPipe, which allows for real-time

[15]
processing of video streams from cameras or other sources and provide
feedback to the user to correct their posture if necessary.
5. Wearable sensors: Some studies have used wearable sensors, such as
accelerometers, to track body movements and provide additional data for
posture analysis. These sensors can be placed on different parts of the body to
provide detailed information about body movements and posture.

4.6 Techniques and Tools Used


4.6.1 Python 3
Python 3 is a popular, high-level programming language known for its readability and
ease of use. It is often used for web development, scientific computing, data analysis,
artificial intelligence, and other types of application development. Python 3 is the most
recent version of the Python programming language and it has several new features
and enhancements compared to the previous version, Python 2.

Figure 2. Python
One of the most notable features of Python 3 is its improved support for Unicode,
which allows for better handling of text in different languages and scripts. Python 3
also includes new syntax, such as the "print" function, which is more consistent across
different platforms.

Python 3 is also a popular choice for machine learning and data science projects, as it
has a wide variety of libraries and frameworks available, such as TensorFlow,
PyTorch, and scikit-learn. These libraries provide powerful tools for tasks such as data
analysis, visualization, and model training.

Python 3 is also widely used in the field of computer vision, with libraries such as
OpenCV and ImageAI providing a wide range of functionality for image and video
processing, object detection, and other tasks.

[16]
Python 3 is a suitable choice for a Gym Exercise Body Posture Detection project as it
provides a wide range of libraries and frameworks for machine learning and computer
vision tasks.

The following libraries and frameworks used in a Gym Exercise Body Posture
Detection project implemented in Python 3:

• OpenCV: OpenCV is a library of computer vision functions that would be


used to pre-process the images and videos, such as cropping and resizing, to
ensure they are suitable for training a machine learning model.
• MediaPipe: MediaPipe is a framework that allows for real-time processing of
video streams from cameras or other sources. The trained model would be
integrated into a pipeline using MediaPipe, which allows for real-time
processing of video streams from cameras or other sources and provide
feedback to the user to correct their posture if necessary.
• NumPy, pandas: These libraries are commonly used for data manipulation,
cleaning and processing.
• sklearn: This is a library that provides machine learning tools for data
processing, model selection and evaluation.

Python 3 provides a rich set of libraries and frameworks used to develop a Gym
Exercise Body Posture Detection project, including machine learning libraries for
training and deploying models, computer vision libraries for pre-processing images
and videos, and frameworks for real-time processing of video streams.

4.6.2 Scikit-Learn
Scikit-learn is a free and open-source machine learning library for Python. It is built
on top of NumPy, SciPy, and matplotlib and provides a wide range of algorithms for
supervised and unsupervised learning, including support vector machines (SVMs),
random forests, k-nearest neighbours, and Naive Bayes. Scikit-learn can be used in a
variety of tasks, such as classification, clustering, and regression.

[17]
Figure 3. Scikit-Learn
For this project, Scikit-learn could be used to train and evaluate a machine learning
model to detect body posture in gym exercises. Scikit-learn can be used to prepare
data, create and evaluate machine learning models, and deploy the model in
production. It also provides a range of metrics to evaluate the performance of the
model.

4.6.3 OpenCV
OpenCV (Open-Source Computer Vision) is a library of programming functions for
real-time computer vision. It is written in C++ and has interfaces for multiple
languages including C++, Python, and Java. OpenCV is open source, meaning that it
is free to use and distribute. OpenCV provides a wide range of functionality for image
and video processing, such as:
• Image manipulation and transformation, such as resizing, cropping, and color
space conversion.
• Feature detection and description, such as Harris corners, SIFT, and SURF.
• Object detection and recognition, such as Haar cascades, HOG and SVM.
• Camera calibration and 3D reconstruction.
• Image and video analysis, such as background subtraction and optical flow.
• Machine learning, including deep learning support.

OpenCV can be used in a wide range of applications, such as:


• Object detection and tracking in video streams
• Facial recognition and emotion detection
• Image and object recognition in robotics
• Image and video editing and processing
• Augmented reality and virtual reality
• Medical imaging analysis

[18]
OpenCV (Open-Source Computer Vision) is a library of programming functions for
real-time computer vision. It can be used in a body posture detection project using the
Gym exercise library to process and analyze video or image data.

Figure 4. OpenCV
In this specific project, OpenCV can be used to detect and track the body posture of a
person while they are performing exercises. For example, it can be used to capture
video or images of a person performing exercises and then use various image
processing techniques to identify the different body parts and joints. Once the body
parts and joints are identified, OpenCV can be used to analyze the posture and detect
any deviations from the correct form.

Additionally, OpenCV can also be used to enhance the image or video quality such as
removing noise or adjusting lighting conditions.

Overall, OpenCV is a powerful tool that can be used to process and analyze image or
video data in a body posture detection project, making it possible to accurately detect
and track a person's posture during exercises and making it a popular choice in a wide
range of applications.

4.6.4 BlazePose
BlazePose is a real-time multi-person 2D and 3D human pose estimation library. It is
based on the open-source project OpenPose, which uses deep learning to estimate the
2D and 3D positions of body joints from a single RGB image.

BlazePose is a C++ library that is designed to be fast and efficient, making it well-
suited for real-time applications such as video surveillance, human-computer
interaction, and sports analysis.

BlazePose uses a convolutional neural network (CNN) to predict the 2D or 3D


positions of body joints from a single RGB image. It detects and tracks multiple people
in the same image, and can estimate poses in real-time with high accuracy. The library

[19]
provides a simple and flexible API that allows developers to integrate pose estimation
into their own applications.

BlazePose can be used in conjunction with other libraries such as OpenCV and Gym
exercise library, to track and detect the body posture of a person while they are
performing exercises. It can also be used in many other use-cases such as video
surveillance, human-computer interaction, sports analysis, and even in robotics.

BlazePose is a real-time multi-person 2D and 3D human pose estimation library that


can be used in a body posture detection project using the Gym exercise library.

Figure 5. BlazePose 33 keypoint topology as COCO (colored with green) superset


In this specific project, BlazePose can be used to detect and track the body posture of
a person while they are performing exercises. For example, it can process video or
images of a person performing exercises and estimate the 2D or 3D positions of body
joints, such as the head, shoulders, elbows, wrists, hips, knees, and ankles. Once the
body joints are detected, BlazePose can be used to analyze the posture and detect any
deviations from the correct form. This information can then be used to provide
feedback to the user, such as correcting their posture or suggesting a different exercise.

BlazePose can also be used to detect multiple people in the same image, making it
useful for group exercise classes or personal training sessions.

Additionally, BlazePose is designed to be fast and efficient, making it well-suited for


real-time applications such as video surveillance, human-computer interaction, and
sports analysis.

[20]
Overall, BlazePose is a powerful tool that can be used to detect and track human
postures in real-time, making it a valuable component in a body posture detection
project.

4.6.5 Jupyter Notebook


Jupyter Notebook is an open-source web-based interactive development environment
(IDE) that allows users to create and share documents that contain live code, equations,
visualizations, and narrative text. It is commonly used for data science, machine
learning, and scientific computing.

The notebook is made up of cells, which can contain text (written in the Markdown
language) or code (in a variety of programming languages such as Python, R, and
Julia). Users can run the code in a cell and view the output directly within the notebook.
This allows for a seamless workflow that combines code execution, debugging, and
data visualization.

Jupyter Notebook is built on top of the Jupyter project, which started as an open-source
initiative to create a web-based interactive shell for Python. It has since evolved to
support many programming languages and has become a popular tool for data science
and machine learning.

Jupyter Notebook can be run on a local machine or on a remote server, and is often
used in a cloud-based environment. It is also integrated with many popular data science
libraries and frameworks such as pandas, numpy, matplotlib, and scikit-learn.

Figure 6. Jupyter Notebook


In the context of Gym Exercise Body Posture Detection, Jupyter Notebook used to
preprocess and visualize the data, train a machine learning model, and evaluate its
performance. The notebook could also be used to document the steps taken in the
project, making it easy to share and reproduce the results.

[21]
It is possible to use Jupyter Notebook with the OpenAI Gym library, which is a toolkit
for developing and comparing reinforcement learning algorithms. One could use the
Gym library to simulate different exercises and use the Jupyter Notebook to train a
model to detect the correct body posture for each exercise.

4.6.6 Anaconda
Anaconda is a free and open-source distribution of the Python and R programming
languages for scientific computing and data science. It includes over 1,500 packages
and is available for Windows, MacOS, and Linux. It provides a convenient way to
manage multiple environments, packages, dependencies and versions for different
projects. It also includes Jupyter notebook, a popular tool for data science and machine
learning development and conda, a package manager that helps manage dependencies
and package versions. Anaconda is widely used in data science, machine learning, and
scientific computing communities.

Figure 7. Anaconda
4.6.7 Anaconda Prompt
Anaconda Prompt is a command-line interface (CLI) that is included with the
Anaconda distribution of Python and R. It allows you to interact with your operating
system and perform various tasks, such as installing packages, managing
environments, and running Python scripts. The Anaconda Prompt is similar to the
command prompt on Windows or the terminal on Mac and Linux. It provides access
to the Anaconda distribution's package manager, conda, which can be used to install,
update, and manage packages and environments. Additionally, Anaconda Prompt also
provides access to the python interpreter, which can be used to run python code or
scripts, as well as command-line tools such as Jupyter Notebook.

[22]
4.6.8 MediaPipe
MediaPipe is an open-source framework for building cross-platform, on-device
multimodal machine learning pipelines. It is developed by Google and provides a set
of reusable and customizable components for tasks such as object detection, image
segmentation, and pose estimation, among others. MediaPipe is designed to be fast,
efficient and easy to use, making it a good choice for real-time and resource-
constrained applications.

MediaPipe can also be used to detect multiple people in the same image, making it
useful for group exercise classes or personal training sessions.

Figure 8. MediaPipe
In this specific project, MediaPipe can be used to detect and track the body posture of
a person while they are performing exercises. For example, it can process video or
images of a person performing exercises and use its pre-trained models for tasks such
as human pose estimation to estimate the 2D or 3D positions of body joints, such as
the head, shoulders, elbows, wrists, hips, knees, and ankles. Once the body joints are
detected, MediaPipe can be used to analyze the posture and detect any deviations from
the correct form. This information can then be used to provide feedback to the user,
such as correcting their posture or suggesting a different exercise.

MediaPipe also provides pre-trained models for other tasks, such as object detection
and image segmentation, that can be used to identify and track specific body parts,
such as a hand or foot.

Additionally, MediaPipe can also be used to detect multiple people in the same image,
making it useful for group exercise classes or personal training sessions.

Overall, MediaPipe is a powerful and versatile framework that can be used to build
machine learning pipelines for various computer vision tasks, and can be a valuable
tool in a body posture detection project, allowing for accurate detection and tracking
of human postures in real-time.

[23]
Chapter 5

TEST CASES
5 TEST CASES
Test cases for a Gym Exercise Body Posture Detection project using machine learning,
OpenCV, MediaPipe, and BlazePose would involve assessing the accuracy and
reliability of the system in detecting and analyzing body postures during various
exercises.

1. Input test: Inputting images or videos of a person performing different


exercises, such as squats, deadlifts, and lunges, and verifying that the system
is able to detect the correct posture for each exercise.
2. Pose detection test: Testing the system's ability to accurately detect and
identify specific body parts, such as the head, shoulders, hips, and legs, in the
images or videos provided.
3. Model accuracy test: Comparing the system's posture detection results with a
dataset of manually annotated images or videos to evaluate the model's
accuracy and identify any errors or biases.
4. Integration test: Testing the integration of the machine learning model with
OpenCV, MediaPipe, and BlazePose to ensure proper functioning of the
system as a whole.
5. Edge case test: Testing the system's ability to handle extreme cases, such as
low lighting or occlusions, and its robustness to such scenarios.
6. Real-time performance test: Evaluating the system's performance in real-
time scenarios, such as a live video feed from a gym or a fitness class, to ensure
that it can provide accurate and timely feedback to users.

[24]
Table 1. Software Testing and Results

Test
Description Test Data Expected Result Pass/Fail
Case
Model accurately
Detection of body Video of a person
1 detects proper squat Pass
postures in squats performing squats
posture
Model accurately
Video of a person
Detection of detects variations in
performing squats
2 variations in body squat posture and Pass
with slight
postures gives accurate
variations
feedback
Model accurately
Detection of Videos of detects proper squat
postures for different posture for
3 Pass
different individuals individuals with
individuals performing squats varying body types
and sizes
Video of a person MediaPipe and
Integration of
performing squats OpenCV successfully
MediaPipe and
4 with MediaPipe integrate with the Pass
OpenCV with
and OpenCV model for real-time
model
integrated detection
Videos of a Model accurately
Robustness to person performing detects proper squat
lighting and squats in different posture despite
5 Pass
background lighting and variations in lighting
variations background and background in the
conditions gym environment
Model accurately
Generalization to Videos of detects body postures
6 Pass
new exercises different exercises in a variety of
exercises
Displaying
Videos of a Model accurately
number of
7 person performing detects postures and Pass
repetitions and
exercise displays results
stages

[25]
Chapter 6

RESULTS
6 RESULTS
6.1 Overview
The system was tested with a group of participants engaged in various exercises, and
the results were promising. The system was able to accurately detect and analyze body
postures during exercises, and provide real-time feedback on proper form and posture.
Additionally, the system was able to track progress over.

6.2 Implementation Procedure


The implementation process for Gym Exercise Body Posture Detection is complex and
requires a comprehensive understanding of the system being used.

First off, the hardware components of the system must be carefully selected in order
to ensure efficient detection performance. The configuration of appropriate optics and
sensors as well as reliable motion capture systems are essential for successful
operation.

Second, software is necessary to calculate the metrics that define optimal postures and
help give feedback to users. This includes algorithms for tracking specific joints in
movement, image enhancement techniques for better visibility, machine-learning
models for overall posture analysis, and custom visualization mechanisms for
displaying results.

Lastly, testing scenarios should be carried out prior to deployment to guarantee optimal
accuracy from the solution. These steps must all be managed correctly with an eye
towards correctly assessing a person's body posture during various gym exercises in
real-time.

[26]
Steps:

1. Open Anaconda Prompt

2. Type jupyter notebook in Anaconda Prompt

3. Starting of Python 3 Kernel

[27]
4. Jupyter notebook start on the localhost server in the browser. Click on the file
name Gym Exercise Body Posture Detection.ipynb

5. Click on Run All option in the Cell tab

[28]
6. Exercises buttons displayed after successful executing all cells.

7. Click on button. Mediapipe feed pops up with enabled camera.

[29]
Chapter 7

SUMMARY
7 SUMMARY

The Gym Exercise Body Posture Detection project is a system that uses machine
learning and computer vision techniques to detect and analyze body postures during
exercises at a gym. The system utilizes a combination of OpenCV, MediaPipe and
BlazePose, which are open-source libraries and frameworks, to process images and
videos of the exerciser and detect key points on the person's body to analyze their
posture.

OpenCV is an open-source library for computer vision, which is used to capture


images and videos from cameras and other imaging devices. MediaPipe is a framework
for building cross-platform multimodal applied ML pipelines and BlazePose is a
machine learning model that detects and tracks keypoints of human bodies in real-
time.

The system uses these technologies to track the movements of the exerciser and detect
any deviations from the proper posture or form. This information can be used to
provide feedback to the exerciser in real-time, or to track progress over time and detect
potential injury risks. The project can also be used to create a virtual trainer that can
provide guidance and feedback on proper exercise form.

[30]
Chapter 8

CONCLUSION
8 CONCLUSION
In conclusion, the Gym Exercise Body Posture Detection project is a sophisticated
system that utilizes machine learning, computer vision, and open-source libraries and
frameworks to detect and analyze body postures during exercises. The system
combines the capabilities of OpenCV for image and video capture, MediaPipe for
building cross-platform multimodal applied ML pipelines, and BlazePose, a machine
learning model that detects and tracks keypoints of human bodies in real-time.

This project has the potential to revolutionize the way people exercise by providing
real-time feedback on proper form and posture, tracking progress over time and
detecting potential injury risks. It can also be used to create a virtual trainer that can
provide guidance and feedback on proper exercise form.

Overall, the Gym Exercise Body Posture Detection project demonstrates the powerful
capabilities of machine learning and computer vision in improving the effectiveness
and safety of exercise. It's a great example of how these technologies can be used to
enhance human performance in various fields.

[31]
Chapter 9

REFERENCES
9 REFERENCES

[1]. Anaconda Distribution. https://www.anaconda.com/products/distribution.

[2]. Bazarevsky, Valentin, et al. “Blazepose: On-Device Real-Time Body Pose


Tracking.” ArXiv.org, 17 June 2020,
https://arxiv.org/abs/2006.10204#:~:text=We%20present%20BlazePose%2C%
20a%20lightweight,on%20a%20Pixel%202%20phone.

[3]. “Home.” OpenCV, 18 Jan. 2023, https://opencv.org/.

[4]. “Jupyter Project Documentation#.” Jupyter Project Documentation - Jupyter


Documentation 4.1.1 Alpha Documentation, https://docs.jupyter.org/en/latest/.

[5]. “Learn.” Scikit, https://scikit-learn.org/stable/.

[6]. “Pose.” Mediapipe, https://google.github.io/mediapipe/solutions/pose.

[7]. Computer VisionDeep LearningMachine PerceptionOn-device Learning. “On-


Device, Real-Time Body Pose Tracking with MediaPipe Blazepose.” – Google
AI Blog, https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-
tracking.html.

[8]. "Real-time posture detection and recognition using RGB-D data" by J. Shotton,
A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A.
Blake in the Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2011.

[9]. "Real-time human pose recognition in parts from single depth images" by X.
Liu, Y. Wu, Z. Wang, Y. Qiao, and X. Tang in the IEEE Transactions on
Pattern Analysis and Machine Intelligence (PAMI), 2017.

[10]. "Real-time 2D human pose estimation using part affinity fields" by Z. Cao, T.
Simon, S.-E. Wei, and Y. Sheikh in the Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2017.

[11]. "Real-time multi-person 2D pose estimation using part affinity fields" by Z.


Cao, G. Hager, and T. Darrell in the Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2018.

[32]
Chapter 10

APPENDICES
10 APPENDICES
10.1 APPENDIX A (Source Code)
!pip install mediapipe opencv-python
import cv2
import mediapipe as mp
import numpy as np
mp_drawing = mp.solutions.drawing_utils

mp_pose=mp.solutions.pose
def calculate_angle(a,b,c):
a = np.array(a) # First
b = np.array(b) # Mid
c = np.array(c) # End

radians = np.arctan2(c[1]-b[1], c[0]-b[0]) - np.arctan2(a[1]-b[1], a[0]-b[0])


angle = np.abs(radians*180.0/np.pi)

if angle >180.0:
angle = 360-angle

return angle
from IPython.display import display
import ipywidgets as widgets

def bicep_Curl(arg):
cap = cv2.VideoCapture(0)

# Curl counter variables


counter = 0
stage = None

## Setup mediapipe instance


with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5
) as pose:
while cap.isOpened():
ret, frame = cap.read()

# Recolor image to RGB


image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image.flags.writeable = False

# Make detection
results = pose.process(image)

[33]
# Recolor back to BGR
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

# Extract landmarks
try:
landmarks = results.pose_landmarks.landmark

# Get coordinates
shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value
].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y]
elbow = [landmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].x,lan
dmarks[mp_pose.PoseLandmark.LEFT_ELBOW.value].y]
wrist = [landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].x,land
marks[mp_pose.PoseLandmark.LEFT_WRIST.value].y]

# Calculate angle
angle = calculate_angle(shoulder, elbow, wrist)

# Visualize angle
cv2.putText(image, str(angle),
tuple(np.multiply(elbow, [640, 480]).astype(int)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.L
INE_AA
)

# Curl counter logic


if angle > 160:
stage = "DOWN"
if angle < 30 and stage =='DOWN':
stage="UP"
counter +=1
print(counter)

except:
pass

# Render curl counter


# Setup status box
cv2.rectangle(image, (5,5), (200,90), (0,0,0,0))

# Rep data
cv2.putText(image, 'REPS', (15,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, str(counter),
(20,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

[34]
# Stage data
cv2.putText(image, 'STAGE', (110,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, stage,
(100,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

# Render detections
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POS
E_CONNECTIONS,
mp_drawing.DrawingSpec(color=(0,0,255), thickness=2, circl
e_radius=2),
mp_drawing.DrawingSpec(color=(0,255,0), thickness=2, circl
e_radius=2)
)

cv2.imshow('Mediapipe Feed', image)

if cv2.waitKey(10) & 0xFF == ord('q'):


break

cap.release()
cv2.destroyAllWindows()

def front_Raise(arg):
cap = cv2.VideoCapture(0)

# Curl counter variables


counter = 0
stage = None

## Setup mediapipe instance


with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5
) as pose:
while cap.isOpened():
ret, frame = cap.read()

# Recolor image to RGB


image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image.flags.writeable = False

# Make detection
results = pose.process(image)

# Recolor back to BGR


image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

[35]
# Extract landmarks
try:
landmarks = results.pose_landmarks.landmark

# Get coordinates
shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value
].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y]
wrist = [landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].x,land
marks[mp_pose.PoseLandmark.LEFT_WRIST.value].y]
leftHip = [landmarks[mp_pose.PoseLandmark.LEFT_HIP.value].x,landma
rks[mp_pose.PoseLandmark.LEFT_HIP.value].y]

# Calculate angle
angle = calculate_angle(leftHip,shoulder, wrist)

# Visualize angle
cv2.putText(image, str(angle),
tuple(np.multiply(shoulder, [640, 480]).astype(int)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.L
INE_AA
)

# Curl counter logic


if angle > 95:
stage = "UP"
if angle < 15 and stage =='UP':
stage="DOWN"
counter +=1
print(counter)

except:
pass

# Render curl counter


# Setup status box
cv2.rectangle(image, (5,5), (200,90), (0,0,0,0))

# Rep data
cv2.putText(image, 'REPS', (15,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, str(counter),
(20,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

# Stage data
cv2.putText(image, 'STAGE', (110,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)

[36]
cv2.putText(image, stage,
(100,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

# Render detections
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POS
E_CONNECTIONS,
mp_drawing.DrawingSpec(color=(0,0,255), thickness=2, circl
e_radius=2),
mp_drawing.DrawingSpec(color=(0,255,0), thickness=2, circl
e_radius=2)
)

cv2.imshow('Mediapipe Feed', image)

if cv2.waitKey(10) & 0xFF == ord('q'):


break

cap.release()
cv2.destroyAllWindows()
def shoulder_Press(arg):
cap = cv2.VideoCapture(0)

# Curl counter variables


counter = 0
stage = None

## Setup mediapipe instance


with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5
) as pose:
while cap.isOpened():
ret, frame = cap.read()

# Recolor image to RGB


image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image.flags.writeable = False

# Make detection
results = pose.process(image)

# Recolor back to BGR


image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

# Extract landmarks
try:
landmarks = results.pose_landmarks.landmark

[37]
# Get coordinates
shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value
].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y]
wrist = [landmarks[mp_pose.PoseLandmark.LEFT_WRIST.value].x,land
marks[mp_pose.PoseLandmark.LEFT_WRIST.value].y]
leftHip = [landmarks[mp_pose.PoseLandmark.LEFT_HIP.value].x,landma
rks[mp_pose.PoseLandmark.LEFT_HIP.value].y]

# Calculate angle
angle = calculate_angle(leftHip,shoulder, wrist)

# Visualize angle
cv2.putText(image, str(angle),
tuple(np.multiply(shoulder, [640, 480]).astype(int)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.L
INE_AA
)

# Curl counter logic


if angle > 165:
stage = "UP"
if angle < 45 and stage =='UP':
stage="DOWN"
counter +=1
print(counter)

except:
pass

# Render curl counter


# Setup status box
cv2.rectangle(image, (5,5), (200,90), (0,0,0,0))

# Rep data
cv2.putText(image, 'REPS', (15,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, str(counter),
(20,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

# Stage data
cv2.putText(image, 'STAGE', (110,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, stage,
(100,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

[38]
# Render detections
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POS
E_CONNECTIONS,
mp_drawing.DrawingSpec(color=(0,0,255), thickness=2, circl
e_radius=2),
mp_drawing.DrawingSpec(color=(0,255,0), thickness=2, circl
e_radius=2)
)

cv2.imshow('Mediapipe Feed', image)

if cv2.waitKey(10) & 0xFF == ord('q'):


break

cap.release()
cv2.destroyAllWindows()
def front_Squat(arg):

cap = cv2.VideoCapture(0)

# Curl counter variables


counter = 0
stage = None

## Setup mediapipe instance


with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5
) as pose:
while cap.isOpened():
ret, frame = cap.read()

# Recolor image to RGB


image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image.flags.writeable = False

# Make detection
results = pose.process(image)

# Recolor back to BGR


image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

# Extract landmarks
try:
landmarks = results.pose_landmarks.landmark

# Get coordinates
shoulder = [landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value
].x,landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER.value].y]
leftHip = [landmarks[mp_pose.PoseLandmark.LEFT_HIP.value].x,landma

[39]
rks[mp_pose.PoseLandmark.LEFT_HIP.value].y]
letfKnee = [landmarks[mp_pose.PoseLandmark.LEFT_KNEE.value].x,lan
dmarks[mp_pose.PoseLandmark.LEFT_KNEE.value].y]
leftankel = [landmarks[mp_pose.PoseLandmark.LEFT_ANKLE.value].x,la
ndmarks[mp_pose.PoseLandmark.LEFT_ANKLE.value].y]

# Calculate angle
angle = calculate_angle(leftHip, shoulder,letfKnee)
angle2 = calculate_angle(leftHip,letfKnee,leftankel)

# Visualize angle
cv2.putText(image, str(angle),
tuple(np.multiply(leftHip, [640, 480]).astype(int)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.L
INE_AA
)
cv2.putText(image, str(angle2),
tuple(np.multiply(letfKnee, [640, 480]).astype(int)),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.L
INE_AA
)

# Curl counter logic


if angle < 5 and angle2 > 160:
stage = "UP"
if angle < 50 and angle2 < 70 and stage =='UP':
stage="DOWN"
counter +=1
print(counter)

except:
pass

# Render curl counter


# Setup status box
cv2.rectangle(image, (5,5), (200,90), (0,0,0,0))

# Rep data
cv2.putText(image, 'REPS', (15,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, str(counter),
(20,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

# Stage data
cv2.putText(image, 'STAGE', (110,25),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 1, cv2.LINE_
AA)
cv2.putText(image, stage,

[40]
(100,70),
cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 2, cv2.LINE_AA)

# Render detections
mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_pose.POS
E_CONNECTIONS,
mp_drawing.DrawingSpec(color=(0,0,255), thickness=2, circl
e_radius=2),
mp_drawing.DrawingSpec(color=(0,255,0), thickness=2, circl
e_radius=2)
)

cv2.imshow('Mediapipe Feed', image)

if cv2.waitKey(10) & 0xFF == ord('q'):


break

cap.release()
cv2.destroyAllWindows()
bicep_Curlbtn = widgets.Button(description = 'Bicep Curl')
bicep_Curlbtn.on_click(bicep_Curl)
front_Raisebtn = widgets.Button(description = 'Front Raise')
front_Raisebtn.on_click(front_Raise)
shoulder_Pressbtn = widgets.Button(description = 'Shoulder Press')
shoulder_Pressbtn.on_click(shoulder_Press)
front_Squatbtn = widgets.Button(description = 'Front Squat')
front_Squatbtn.on_click(front_Squat)
display(bicep_Curlbtn)
display(front_Raisebtn)
display(shoulder_Pressbtn)
display(front_Squatbtn)

[41]
10.2 APPENDIX B (Screenshots)

Figure 9. Curl Counter

[42]
Figure 10. Front Raise

[43]
Figure 11. Shoulder Press

[44]
Figure 12. Front Squat

[45]

You might also like