
Baladasu A Email: baladasu92@gmail.com
Hadoop/Spark Developer Mobile: +91-9513129406

Career Objective:

Around 4.6 years of IT experience in Core Java, Hadoop & Spark. Looking to work with an
esteemed organization that gives me an opportunity to utilize my analytical skills and help
make better business decisions.

Technical Summary:

• Around 4.6 years of IT experience in Core Java, Hadoop & Spark.
• 2+ years of experience on Big Data Hadoop (MapReduce, HDFS, YARN, Hive, Sqoop).
• 1+ year of experience on Spark with Scala.
• Hands-on experience with PySpark and Spark with Java.
• 1 year and 6 months of experience on Core Java, JDBC & SQL.
• Experience in designing and implementing Big Data projects using Hadoop ecosystem tools such as
MapReduce, Hive, Pig, Sqoop, and Spark with Scala.
• Experience in analysing data using Hive scripts, Pig Latin, and custom MapReduce programs.
• Worked on data processing, transformations, and actions in Spark using Scala and Python.
• Implemented partitioning, dynamic partitions, and buckets in Hive (see the sketch after this list).
• Developed Sqoop jobs for importing data from Oracle and MySQL environments into HDFS.
• Good knowledge of RDBMS concepts and SQL queries.
• Conceptual knowledge of Kafka and HBase.
• Work experience on different kinds of projects in Hadoop, including implementation, development,
and support.
• Working experience on Hadoop distributions such as Cloudera and Hortonworks.
• Good knowledge of CentOS and Ubuntu.
• Sound understanding of Hadoop technologies.
• Proficient in Core Java, object-oriented programming, and collections concepts.
• Exceptional ability to learn new concepts; perform well as part of a team.
• Hardworking and enthusiastic.
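
As an illustration of the Hive partitioning work listed above, here is a minimal sketch of a dynamic-partition insert run through Spark with Scala. The table and column names (txn_part, txn_raw, country) are hypothetical placeholders, not actual project objects:

    import org.apache.spark.sql.SparkSession

    object HivePartitionSketch {
      def main(args: Array[String]): Unit = {
        // Hive support is needed so the DDL and insert hit the Hive metastore
        val spark = SparkSession.builder()
          .appName("HivePartitionSketch")
          .enableHiveSupport()
          .getOrCreate()

        // Enable dynamic partitioning so Hive derives partition values from the data
        spark.sql("SET hive.exec.dynamic.partition = true")
        spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

        // Partitioned target table (placeholder names)
        spark.sql(
          """CREATE TABLE IF NOT EXISTS txn_part (
            |  id INT,
            |  amount DOUBLE
            |) PARTITIONED BY (country STRING)
            |STORED AS ORC""".stripMargin)

        // Dynamic-partition insert: 'country' is taken from the last SELECT column
        spark.sql(
          """INSERT INTO TABLE txn_part PARTITION (country)
            |SELECT id, amount, country FROM txn_raw""".stripMargin)

        spark.stop()
      }
    }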

Technology Summary:

• Hadoop Ecosystem : Apache Hive, Sqoop, Pig, Spark Core & Spark SQL.
• Programming Languages : Core Java, Scala, and Python.
• Databases & NoSQL : SQL & HBase.
• IDEs : Eclipse, Scala IDE, NetBeans.

Professional Summary:

• Working as a Senior Software Engineer at Impetus Infotech (India) Pvt Ltd, Bangalore, from
6th August 2018 till date.
• Worked at CSC India Private Ltd from Jan 2015 to May 2018.

Education:
• B.Tech from VIST College, affiliated to JNTU Hyderabad.
Project 3:
Production Credit Reporting (PCR):

Role : Hadoop/Spark Developer
Duration : 6th Aug 2018 till date
Environment : Hadoop 2.x, Cloudera, Hive, Sqoop, Spark with Scala
Client : Citi Group
Team Size : 5

Description:

PCR is a production-credits calculation system that captures banking information from regional
systems. The project focuses on collecting data about banking transactions from different sources. The
transactions are then analyzed using MapReduce programs to find different user patterns. A
visualization layer helps users track their financial activity in many scenarios. The system captures
data through CSV files and direct database pulls from approximately 60 different front-office systems
and manual sources throughout the world (US Citi Group).

Responsibilities:

• Involved in loading data into the Hadoop cluster.
• Created and maintained Hive tables (managed & external tables).
• Pre-processed data using Hive.
• Stored and retrieved data using Hive queries.
• Used partitioning & bucketing in Hive for fast performance.
• Implemented Hive tables and HQL queries for the reports.
• Involved in single-insert and multi-insert operations in Hive.
• Involved in database connectivity using Sqoop.
• Archived files into HAR files.
• Worked on Spark Core and Spark SQL to improve performance (see the sketch below).
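
A minimal sketch of what the CSV-to-Hive part of such a pipeline might look like in Spark with Scala. The input path, partition column (region), and table name (pcr.credit_txns) are hypothetical placeholders, not the project's actual ones:

    import org.apache.spark.sql.SparkSession

    object PcrLoadSketch {
      def main(args: Array[String]): Unit = {
        // Hive support so saveAsTable writes into the Hive warehouse
        val spark = SparkSession.builder()
          .appName("PCR CSV load sketch")
          .enableHiveSupport()
          .getOrCreate()

        // Hypothetical CSV feed from one of the front-office systems
        val txns = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/data/pcr/incoming/txns.csv")   // placeholder path

        // Partition by region so downstream Hive queries can prune partitions
        txns.write
          .mode("append")
          .partitionBy("region")                // hypothetical column
          .saveAsTable("pcr.credit_txns")       // placeholder table name

        spark.stop()
      }
    }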

Project 2:

Client : Verizon
Period : April 2018 to Ju 2018
Team Size : 8
Environment : Hadoop, Teradata, Hive
Role : Hadoop Developer

Description:

The Verizon project provides a data-migration service from Teradata to Hive for the Verizon
queries that consume the most CPU, using IDW tools that offer features such as Workload
Assessment, Workload Migration, Transformation, Execution, and Validation.

Roles and Responsibilities:

• Involved in analysing the Teradata queries.
• Wrote Hive queries and converted Teradata queries to Hive queries (see the sketch below).
• Performed testing of queries and optimized the Hive queries.
• Executed the optimized Hive queries.
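
As a hedged illustration of this kind of conversion: Teradata's QUALIFY clause has no Hive equivalent, so it is commonly rewritten as a row_number() subquery. The table and column names (billing, cust_id, bill_amt, bill_dt) are hypothetical, and running the result through Spark's Hive support is one possible execution path, not necessarily the project's:

    import org.apache.spark.sql.SparkSession

    object TeradataToHiveSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("Converted Teradata query sketch")
          .enableHiveSupport()
          .getOrCreate()

        // Teradata original (QUALIFY is not supported in Hive):
        //   SELECT cust_id, bill_amt
        //   FROM billing
        //   QUALIFY ROW_NUMBER() OVER (PARTITION BY cust_id ORDER BY bill_dt DESC) = 1
        //
        // Hive rewrite: push the window function into a subquery and filter on it
        val latestBills = spark.sql(
          """SELECT cust_id, bill_amt FROM (
            |  SELECT cust_id, bill_amt,
            |         ROW_NUMBER() OVER (PARTITION BY cust_id ORDER BY bill_dt DESC) AS rn
            |  FROM billing
            |) t WHERE rn = 1""".stripMargin)

        latestBills.show()
        spark.stop()
      }
    }
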
Project 1:

Client : American Express
Period : Jan 2015 to Mar 2018
Team Size : 6
Environment : Core Java, SQL and Big Data
Role : Hadoop/Java Developer

Description:

American Express is a leading bank providing various financial services and solutions to corporates
and customers. Cornerstone is the data lake for American Express. The data lake holds data from
various SORs (sources of record) and is used as a single repository for maintaining any analytics-related
data in Amex. As part of translating the data provided by each SOR into the final storage
structure in Cornerstone, the data goes through multiple layers of processing and is finally stored in
the Hive warehouse.

Roles and Responsibilities:

• Worked as a team member on the Statement module.
• Coded using Core Java.
• Client interaction (interacting with the client on requirements and with specialists).
• Unit tested components; maintained the code and components.
• Involved in loading data into the Hadoop cluster (see the sketch below).
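
A minimal sketch of one common way to load files into a Hadoop cluster, using the Hadoop FileSystem API (shown in Scala for consistency with the other sketches, though the project itself used Core Java); both paths are hypothetical placeholders:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object HdfsLoadSketch {
      def main(args: Array[String]): Unit = {
        // Picks up fs.defaultFS from core-site.xml on the classpath
        val conf = new Configuration()
        val fs = FileSystem.get(conf)

        // Copy a local statement file into HDFS (placeholder paths)
        fs.copyFromLocalFile(
          new Path("/local/statements/2015-01.csv"),
          new Path("/data/cornerstone/statements/"))

        fs.close()
      }
    }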

Candidate Signature                                        Location
