
Azure Data Engineer

Email: dataengineer9000@gmail.com Mobile No: 7780263601

Name: Mallikarjuna
______________________________________________________________
Professional Summary:
 Having 4.1 years of experience in the IT industry, including 2.8 years of experience in Azure cloud,
Azure Data Factory, Azure Databricks, Spark, PySpark, data analysis, maintenance, testing
and documentation, and 1.4 years of experience in SQL Server, SSIS and Power BI.
 Creating pipelines and defining end-to-end data-driven workflows using Azure Data
Factory.
 Hands-on experience in scheduling and monitoring pipelines.
 Good experience with Activities and Triggers.
 Good understanding of Integration Runtimes to connect different services.
 Good experience with Databricks, Python and SQL Server.
 Familiar with data warehousing using data extraction, data transformation and data
loading (ETL).
 Orchestrated data integration pipelines in ADF using various activities such as Get
Metadata, Lookup, ForEach, Wait, Execute Pipeline, Set Variable, Filter, Until, etc.
 Extensive experience in data extraction, data transformation and data loading (ETL).
 Extensive experience with data warehousing concepts such as data marts, dimensions and
facts.
 Able to quickly learn new concepts and technologies.
Professional Experience:

 Working as a Software Engineer at Mount Point Technologies Pvt Ltd, deployed at
client (Luxoft), from Dec 2019 to date.
Education Qualification:
B.Tech from JNTU, Anantapur

Technical Skills:
Languages: SQL, PySpark, Azure SQL
Cloud Technologies: Azure Blob Storage, ADLS, Azure Data Factory, Azure Analysis Services,
SQL Server Integration Services (self-hosted), data migration (cloud and on-premises platforms)
Tools & Utilities: MS SQL Server Management Studio, Microsoft Azure Storage Explorer, ETL,
Python, PySpark
Career Summary:
PROJECT 1: Nestle (April 2022 to Dec 2023)
Team Size: 6
Role: Data Engineer
Environment: Azure Data Factory, Azure Databricks, Azure SQL, Azure Data Lake Gen2, Azure Blob
Storage

Roles and Responsibilities:


 Migrated SQL scripts from SQL Server 2008 R2 to SQL Server 2017.
 Participated in requirement meetings and scrum calls to understand the report
requirements.
 Created Data Factory pipelines using Python.
 Created dimension tables and fact tables in SQL Server 2017.
 Participated in customer interaction sessions for status updates and requirement clarifications.
 Worked on supporting operations of various loads and processes.
 Scheduled the ETL package (daily) using SQL Server 2012 Management Studio.
 Extensively used Derived Column, Data Conversion, Conditional Split and Aggregate
transformations.
 Scripted OLAP database backups and scheduled a daily backup.
 Processed data entirely through stored procedures.
 Handled space issues in the SQL Server instance along with the DBA.
 Worked on query performance tuning on legacy systems.
 Validated existing mappings that were producing errors and modified them
to produce correct results.
 Wrote and modified T-SQL scripts for database modifications (triggers, views, stored
procedures).

PROJECT 2: FORD (April 2021 to March 2022)
Team Size: 5
Role: Data Engineer
Environment: Azure Data Factory, Azure Databricks, Azure SQL, Azure Data Lake Gen2, Azure Blob
Storage

Roles and Responsibilities:


 Involved in production support activities.
 Worked on Azure Data Factory deployment and data integration & automation using
Python.
 Monitored Azure Data Factory pipelines in different environments.
 Responsible for monitoring Remedy tickets to avoid response & resolution SLA breaches.
 Provided basic analysis for Remedy incidents and job failures.
 Supported and maintained existing jobs.
 Deployed code to multiple environments through the CI/CD process,
worked on code defects during SIT and UAT testing, and provided support
for data loads during testing; implemented reusable components to reduce manual
intervention.
 Worked independently by ingesting data into Azure Data Lake and provided feedback
based on reference architecture, naming conventions, guidelines and
best practices.
 Performed ELT processes in Azure SQL Server based on the business rules.
 Interacted with clients and all stakeholders; status reporting.
 Built scripts to increase efficiency and automate repetitive tasks.
 Connected to both SAP and Oracle to move data to ADLS.
 Validated data using Databricks notebooks and sent data to ADLS, Blob and
ADLS Gen2.
 Created Azure Data Factory pipelines, Azure Databricks notebooks and clusters.
 Copied data from Blob to ADLS and from ADLS to ADLS using Azure Data Factory.
 Pulled data using API connections and sent it to Blob storage.
 Deleted files from Blob storage automatically once a week.
 Managed data recovery for Azure Data Factory pipelines.

PROJECT 3: Sydney Trains (Dec 2019 to March 2021)
Team Size: 8
Role: Software Engineer
Environment: SQL Server 2017

 Migrated SQL scripts from SQL Server 2008 R2 to SQL Server 2017.
 Created dimension tables and fact tables in SQL Server 2017.
 Scheduled the ETL package (daily) using SQL Server 2012 Management Studio.
 Monitored Azure Data Factory pipelines in different environments.
 Managed data recovery for Azure Data Factory pipelines.
 Performed ELT processes in Azure SQL Server based on the business rules.
 Handled space issues in the SQL Server instance along with the DBA.
