PROFILE
Senior Healthcare Associate (AR) with over 4.6 years of experience identifying client requirements, developing proposals, and ensuring time-bound execution. Interested in pursuing a Data Science career, with an eye for fine-grained data points. Specialized in data cleaning, data visualization, and machine learning.
EDUCATION
Post Graduate in Data Science and Business Analytics, Great Lakes Institute Dec 2021 – present | Chennai, TN
Bachelor of Computer Application, Madras Christian College Apr 2014 – May 2017 | Chennai, TN
KEY SKILLS
• Programming Language: Python (Seaborn, NumPy, pandas, scikit-learn, Statsmodels)
PROJECTS
Hackathon
• Participated and ranked 9th by building an AdaBoost model with an accuracy score of 95.55% to identify Shinkansen payment defaulters
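The modelling step above can be sketched as follows. This is a minimal illustration only: the real hackathon features and tuning are not shown in this document, so a synthetic dataset stands in for the Shinkansen data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the hackathon data (the real features are assumptions here)
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# AdaBoost fits a sequence of weak learners (shallow trees by default),
# reweighting misclassified samples at each round
model = AdaBoostClassifier(n_estimators=200, random_state=42)
model.fit(X_tr, y_tr)

acc = accuracy_score(y_te, model.predict(X_te))
print(f"accuracy: {acc:.4f}")
```

On the actual competition data, the same pipeline reportedly reached 95.55% accuracy.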
Capstone Project: Customer churn prediction for an e-commerce company, to provide segmented offers to potential churners
• Performed EDA on 48K active-user records using NumPy, pandas, Tableau, and seaborn to understand churn rate and churn intention across various factors
• Performed feature engineering and built classification models such as Logistic Regression, Naïve Bayes, Decision Tree, and Random Forest
• Selected the optimal model by recall score and identified the top 5 features that discriminate customer churn
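The model-selection step above can be sketched as below: train the four classifiers, compare them on recall (a missed churner is a lost customer, so recall on the churn class matters more than raw accuracy), then pull the top-5 features by importance. The data here is synthetic and the feature indices are illustrative, not the capstone's actual features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Imbalanced synthetic stand-in for the churn data (~20% churners)
X, y = make_classification(n_samples=3000, n_features=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Fit each model and score it on recall for the churn class (label 1)
recalls = {name: recall_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
           for name, m in models.items()}
best_name = max(recalls, key=recalls.get)

# Top-5 features by importance from the fitted random forest
importances = models["random_forest"].feature_importances_
top5 = np.argsort(importances)[::-1][:5]
print(best_name, top5)
```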
Data Visualization using Power BI: An insurance company needs high-level insights to drive policymaking
• Cleaned and transformed the data using Power Query, then developed an interactive dashboard over 30K records using visualization charts such as stacked charts, bubble charts, treemaps, Pareto charts, scatterplots, word clouds, line plots, histograms, boxplots, circle views, and heatmaps
• Used tables, measures, filters, and parameters to make the dashboard even more interactive
Machine Learning: A bank wants to segment customers based on credit card usage; an insurance firm providing tour insurance needs to identify the reason for its higher claim frequency
• Solution: Applied K-Means and Agglomerative Clustering on the bank dataset and determined the optimum number of clusters from the elbow curve, dendrogram, and silhouette score
• Built classification models such as CART, Random Forest, and Artificial Neural Network on the insurance firm dataset; concluded that CART was the optimal model with an accuracy score of 97%, and provided recommendations to minimize the claim frequency
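The cluster-count selection described above can be sketched as follows. A synthetic blob dataset stands in for the bank's credit-card-usage data; the elbow curve is represented by the inertia values, and the silhouette score picks the k with the best-separated clusters.

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the bank credit-card-usage data
X, _ = make_blobs(n_samples=500, centers=4, random_state=7)

# Elbow curve: within-cluster sum of squares (inertia) for k = 2..7;
# the "elbow" is where adding clusters stops reducing inertia sharply
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=7).fit(X).inertia_
            for k in range(2, 8)}

# Silhouette score: higher means tighter, better-separated clusters
sil = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10,
                                     random_state=7).fit_predict(X))
       for k in range(2, 8)}
best_k = max(sil, key=sil.get)

# Agglomerative clustering at the chosen k
# (a dendrogram would be cut at the level that yields the same k)
labels = AgglomerativeClustering(n_clusters=best_k).fit_predict(X)
print(best_k)
```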
Academic E-Portfolio: https://eportfolio.greatlearning.in/nisha-a
CERTIFICATION
TATA Virtual Experience Program Participant with Forage Dec 2022
• Understood the Online Retail dataset and framed business-scenario questions based on company requirements
• Cleaned and transformed the data using Python; selected the right visuals in Power BI based on the business scenarios
PROFESSIONAL EXPERIENCE
Senior Process Executive, Prolify Tech Aug 2020 – Jun 2022 | Chennai, TN
• Compiled, cleaned, and manipulated data in Excel for proper handling
• Contributed to the implementation of a clearinghouse rejection process that helped decrease untimely-filing denials by 75%
• Created and maintained standard operating procedures (SOPs) and quality checklists to ensure high-quality output
• Implemented new procedures around the AR flow that reduced average invoice processing time by 20%
• Audited the activities of 6 team members on production lines for quality improvement
• Analyzed data to determine the causes of non-payment and identify the scope for cash collection