
M Srilakshmi

Email: mannemala1401@gmail.com
Phone: +91- 7981598455
Azure Data Engineer

Summary of Skills:

• 7+ years of experience in IT.
• 4.5 years of experience in Microsoft Azure cloud technologies.
• Hands-on experience with Azure Data Lake Store (ADLS), Azure Data Factory (ADF), Azure Blob Storage, Databricks, Azure Synapse, etc.
• Excellent knowledge of ADF building components: Integration Runtime, Linked Services, Datasets, Pipelines, and Activities.
• Orchestrated data integration pipelines in ADF using activities such as Get Metadata, Lookup, ForEach, Wait, Execute Pipeline, Set Variable, Filter, Until, etc.
• Implemented dynamic pipelines to extract multiple files into multiple targets with a single pipeline.
• Automated execution of ADF pipelines using triggers.
• Managed data recovery for Azure Data Factory pipelines.
• Hands-on experience with Azure Data Factory core concepts such as datasets, pipelines and activities, and scheduling and execution.
• Worked extensively with the Copy Data activity.
• Good experience working with Azure Blob and Data Lake storage and loading data into Azure Synapse.
• Familiar with pipeline execution methods (Debug vs. Triggers).
• Monitored and managed Azure Data Factory.
• Used ETL methodology extensively to support data extraction, transformation, and processing from sources such as Oracle, SQL Server, and files into Azure Data Lake Storage using Azure Data Factory.
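The dynamic "many files, one pipeline" pattern in the bullets above (Get Metadata lists source files, ForEach iterates, and a parameterized Copy activity routes each file to its target) can be sketched as plain Python. The file names and the prefix-based routing rule here are illustrative assumptions, not details from a real environment:

```python
# Hypothetical sketch of a metadata-driven dynamic pipeline:
# discover files, then route each one to a target table.

def route_target(file_name: str) -> str:
    """Map a source file to a staging table by its name prefix (assumed convention)."""
    prefix = file_name.split("_")[0]
    return f"stg.{prefix}"

def run_dynamic_pipeline(files: list) -> dict:
    """Simulate the ForEach activity: one parameterized copy per discovered file."""
    copies = {}
    for f in files:                    # ForEach over Get Metadata output
        copies[f] = route_target(f)    # parameterized Copy activity (sink table)
    return copies

plan = run_dynamic_pipeline(["sales_2023.csv", "expenses_2023.csv"])
```

In ADF itself, the same routing would be done with dataset parameters and expressions rather than Python, but the control flow is the same.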

EDUCATION:

• B.Tech (ECE) from JNT University Anantapur.


WORK EXPERIENCE:

• Worked at Publicis Sapient from Jan 2022 to Apr 2023.
• Worked at HCL Technologies from Feb 2018 to Jan 2022.
• Worked at Wipro Technologies from Jan 2017 to Jan 2018.

PROJECT DETAILS:
Project 1:

Project Name: European Datawarehouse


Role: Software Engineer
Environment: Microsoft Azure SQL Server, Azure Data Factory, Azure Blob Storage, Azure Synapse, Databricks.

Project Details:

This project calculates and reports finance data related to revenue for the industry. Source systems provide different files containing sales, expense, and operational data, which are loaded into the database using an ETL/ELT tool; reports are then designed on top of it.

Responsibilities:

• Analysed, designed, and built modern data solutions using Azure PaaS services to support visualization of data. Understood the current production state of the application and determined the impact of new implementations on existing business processes.
• Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, and U-SQL, with data ingestion into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW).
• Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL Data Warehouse, Blob Storage, and Azure SQL, including write-back in both directions.
• Responsible for developing the ETL flow, including table design, stored procedures, and functions to convert data into meaningful insights.
• Developed and maintained SSAS cubes for ad hoc reporting as per client requirements.
• Accountable for analysing users' queries and resolving their issues.
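The ETL flow described above turns raw sales and expense records into revenue insights. A minimal sketch of that kind of transformation, assuming hypothetical column names ("unit", "amount") and a simple net-revenue rule (sales minus expenses per business unit):

```python
# Hedged sketch of the revenue transformation step; in the actual project
# this logic lived in stored procedures and functions, not Python.
from collections import defaultdict

def net_revenue(sales: list, expenses: list) -> dict:
    """Aggregate net revenue per business unit: sum(sales) - sum(expenses)."""
    totals = defaultdict(float)
    for row in sales:
        totals[row["unit"]] += row["amount"]
    for row in expenses:
        totals[row["unit"]] -= row["amount"]
    return dict(totals)

result = net_revenue(
    [{"unit": "EU", "amount": 100.0}, {"unit": "EU", "amount": 50.0}],
    [{"unit": "EU", "amount": 30.0}],
)
```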

Project 2:
Project Name: Suncorp
Role: Software Engineer
Environment: Azure Data Factory, Data Lake, Azure SQL Database, Databricks, Blob Storage
Responsibilities:
• Ingested source data from on-premises to the cloud environment into Data Lake Gen2 using a self-hosted IR with ADF as the ETL tool, using activities such as Lookup, ForEach, and Copy, and scheduled the pipelines as per business requirements.
• Integrated Databricks notebooks with ADF.
• Gathered business requirements from the business partners.
• Configured key vaults.
• Created datasets for Azure SQL Data Warehouse and Azure Blob Storage.
• Created Azure Data Factory pipelines to copy data from on-premises sources to an Azure Blob container as CSV files, then loaded the same into Azure SQL Data Warehouse tables.
• As part of dynamic pipeline development, wrote JSON expressions inside activities and datasets.
• Configured Azure cloud services including Data Lake Storage and Azure SQL DB.
• Scheduled ADF pipelines by creating schedule triggers.
• Created various ADF pipelines to achieve business scenarios.
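The JSON expressions mentioned above build dataset properties (such as a blob path) from parameters at run time, e.g. an ADF expression along the lines of `@concat(pipeline().parameters.container, '/', item().name)`. A tiny Python stand-in shows the same idea; the container, folder, and file names are made-up examples:

```python
# Hypothetical equivalent of a dynamic ADF dataset path expression:
# the sink path is assembled from pipeline parameters per file.

def render_blob_path(container: str, date_folder: str, file_name: str) -> str:
    """Concatenate path parts, as an ADF @concat(...) expression would."""
    return f"{container}/{date_folder}/{file_name}"

path = render_blob_path("raw", "2023/04/01", "sales.csv")
```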

Project 3:
Project Name: Health Plan Services
Role: Software Engineer
Environment: Microsoft Azure SQL Server, Azure Data Factory, Data Lake Storage.

Responsibilities:

• Extracted data from different source systems, transformed, cleansed, and validated it, and loaded it into the destinations.
• Developed pipelines in Azure Data Factory to fetch data from different sources and load it into Azure SQL Database and Azure Data Lake Storage Gen2.
• Extracted data from ADLS Gen2, transformed it, and loaded it into Azure SQL DB.
• Created linked services for various source systems and target destinations.
• Created datasets for various source systems and target destinations.
• Implemented an incremental load strategy for loading on a daily basis.
• Parameterized the datasets and linked services using parameters and variables.
• Monitored and managed Azure Data Factory.
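The daily incremental-load strategy above is typically watermark-based: only rows modified after the last stored watermark are picked up, and the watermark then advances to the newest timestamp seen. A minimal sketch, with hypothetical row shapes and column names:

```python
# Hedged sketch of a watermark-driven incremental load; in ADF this would be
# a Lookup (read watermark) + parameterized Copy + Stored Procedure (advance
# watermark), not Python.
from datetime import datetime

def incremental_load(rows: list, watermark: datetime):
    """Return the delta since `watermark` and the advanced watermark value."""
    delta = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in delta), default=watermark)
    return delta, new_watermark

rows = [
    {"id": 1, "modified": datetime(2023, 4, 1)},
    {"id": 2, "modified": datetime(2023, 4, 2)},
]
delta, wm = incremental_load(rows, datetime(2023, 4, 1))
```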
