
Introduction:

To enhance our defect detection stack with the latest research and state-of-the-art (SOTA) models, we have decided to explore MMSegmentation and benchmark its capabilities on our datasets.

Link: https://github.com/open-mmlab/mmsegmentation

Skills Required:
1. Good understanding of, and proven experience in, machine learning and deep learning.
2. Proven experience in Python.
3. Good knowledge of repository management tools.
4. Ability to write clean, readable, and reusable code/scripts.
5. Good knowledge of Computer Vision and the use cases it can address.

Milestone 1 - $100
Explore the documentation and repositories to summarize the model attributes, such as the following (see the exploration sketch after this list):
1. Underlying framework used by the library/models.
2. Trade-off between accuracy and speed.
3. Feasibility of training with our custom data format.
4. Possibility of model optimization (if yes, with which toolkit or framework).
5. Input/output formats of the data and model.
6. Model evaluation metrics and methodologies.
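
As a starting point for this exploration, here is a minimal sketch, assuming the MMSegmentation 1.x Python API and a local checkout of the repository; the config path, checkpoint, and input size are placeholders from the public model zoo and should be swapped for the candidate models under evaluation.

    # Minimal sketch, assuming the MMSegmentation 1.x API; config path,
    # checkpoint, and input size are placeholders.
    import time

    import numpy as np
    from mmseg.apis import init_model, inference_model

    config_file = 'configs/pspnet/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py'  # placeholder
    checkpoint_file = None  # or a model-zoo .pth path/URL for pretrained weights

    # Build the model from its config (optionally loading pretrained weights).
    model = init_model(config_file, checkpoint_file, device='cuda:0')

    # Probe the expected input/output formats and rough latency with a dummy image.
    dummy_img = np.random.randint(0, 255, (512, 1024, 3), dtype=np.uint8)
    start = time.time()
    result = inference_model(model, dummy_img)
    print(f'Inference time: {time.time() - start:.3f}s')

    # result.pred_sem_seg holds the per-pixel class predictions.
    print(result.pred_sem_seg.data.shape)

Timing single-image inference this way across a few model-zoo configs gives a first-pass view of the accuracy/speed trade-off before any training is done.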

Deliverables
1. Based on the speed/accuracy trade-off, finalize two models: one with higher accuracy and an acceptable level of latency, and another with slightly lower accuracy but low latency, i.e. aim for a balance between speed and accuracy.
2. Timelines for implementing the training and inference scripts.
3. Summary report for milestone 1.

Milestone 2 - $200
Script implementation for training and inference using a custom dataset.
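
A hedged sketch of what the custom-dataset side of these scripts could look like, assuming MMSegmentation 1.x; the class names, palette, and file suffixes are placeholders for the actual defect dataset.

    # Hedged sketch, assuming MMSegmentation 1.x; class names, palette,
    # and file suffixes are placeholders for the real defect dataset.
    from mmseg.datasets import BaseSegDataset
    from mmseg.registry import DATASETS


    @DATASETS.register_module()
    class QualitasDefectDataset(BaseSegDataset):
        """Images and per-pixel defect masks stored in parallel folders."""

        METAINFO = dict(
            classes=('background', 'defect'),   # placeholder class names
            palette=[[0, 0, 0], [255, 0, 0]],   # placeholder colors
        )

        def __init__(self, **kwargs):
            super().__init__(
                img_suffix='.png',               # placeholder image suffix
                seg_map_suffix='.png',           # placeholder mask suffix
                reduce_zero_label=False,
                **kwargs)

Once registered, training and evaluation would typically be driven through the repository's tools/train.py and tools/test.py with a config that points at this dataset class.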

Deliverables
1. Fully working training and inference scripts.
2. Proper documentation, with code/scripts pushed to the Qualitas repository management tool.
3. The scripts should be written in a generic way so that different parts of the code can be reused or modified as required.

Milestone 3 - $200
Benchmarking with multiple datasets, keeping a separate held-out test dataset for each experiment.

Deliverables
1. Run experiments on multiple datasets with different parameters and hyperparameters, and compile a table of results.
2. Analyze the experiment results and draw benchmarking conclusions.
3. Share all relevant logs and model files at the end of benchmarking.

The table of results should contain the following entries for each experiment (see the logging sketch after this list):

A. Start time
B. End time
C. Dataset distribution across the train, validation, and test sets, as well as the distribution per defect/class.
D. Parameters and hyperparameters such as epochs, image size, batch size, learning rate, and augmentation parameters.
E. Final evaluation metrics such as accuracy, precision, recall, mIoU, etc.
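
A minimal sketch of how each experiment could be appended to that table, assuming pandas is acceptable for the report; every column name below is a placeholder that simply mirrors the entries listed above.

    # Minimal sketch using pandas; column names and comments are placeholders
    # mirroring the entries listed above.
    import pandas as pd

    rows = []

    def log_experiment(start, end, split_counts, class_counts, hparams, metrics):
        """Append one experiment's record to the results table."""
        rows.append({
            'start_time': start,
            'end_time': end,
            'train/valid/test': split_counts,   # e.g. counts per split
            'per_class_counts': class_counts,   # e.g. counts per defect/class
            **hparams,                          # epochs, image size, batch size, lr, augmentation
            **metrics,                          # accuracy, precision, recall, mIoU
        })

    # After all experiments have been logged:
    results = pd.DataFrame(rows)
    results.to_csv('benchmark_results.csv', index=False)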

Milestone 4 - $200
Optimization of the models with frameworks such as OpenVINO, TensorRT, TFLite, or ONNX conversion.
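
As one possible starting point (not the required approach), the raw PyTorch model could first be exported to ONNX with torch.onnx.export; the input resolution, opset, and file name below are placeholders. In practice, MMSegmentation models wrap pre- and post-processing, so the export often goes through OpenMMLab's MMDeploy toolkit, which also targets TensorRT and OpenVINO backends.

    # Hedged sketch of a plain ONNX export; shapes, opset, and file names are
    # placeholders, and a deployment toolkit such as MMDeploy may be the more
    # practical route for full MMSegmentation models.
    import torch

    def export_to_onnx(model, out_file='model.onnx', input_shape=(1, 3, 512, 1024)):
        """Export a PyTorch module to ONNX with a dynamic batch dimension."""
        model = model.eval().cpu()
        dummy_input = torch.randn(*input_shape)
        torch.onnx.export(
            model,
            dummy_input,
            out_file,
            opset_version=11,
            input_names=['input'],
            output_names=['output'],
            dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
        )

The exported graph can then be fed to OpenVINO's model conversion or TensorRT's ONNX parser, and the timing and metric comparison from Milestone 3 can be reused against the raw model.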

Deliverables
1. Script implementation for model optimization.
2. Perform optimization and compare results against the raw (unoptimized) model.
3. Share all relevant logs and model files at the end of optimization.
