
Ranga Korivi


Over 7 years of professional experience in designing, developing, integrating and testing
software applications, including 4+ years of experience in various Hadoop Big Data
technologies such as Map-Reduce, Hive, Pig, HBase, Spark, Sqoop and Flume, and 3+ years
of experience in Java EE.
Hands-on experience in programming and implementation in Java and Python, with strong
knowledge of object-oriented concepts.
Good experience in data warehousing using different relational database management
systems such as Oracle, DB2, MySQL and Microsoft SQL Server, in addition to sound
knowledge of NoSQL systems like HBase.

Proficient in using various IDEs like Eclipse, MyEclipse and NetBeans.

Expertise in importing and exporting data in different formats into HDFS, HBase and Hive
from different RDBMS databases, and vice versa.
Hands on experience in working with User Defined Functions in Hive and Pig using Java
and Python scripts.
Efficient in writing Map-Reduce programs using Apache Hadoop API for analyzing
structured and unstructured data.
Skilled at extraction, transformation and analysis of Big Data using Sqoop, Pig and Hive.
Good at optimizing and debugging Hive-QL queries, Pig-Latin scripts and Map-Reduce jobs.

Expert understanding of design patterns with strong analytical skills.


Experience in Big Data solutions for traditional enterprise businesses.

Proficient in requirements gathering, analysis and validation, and in preparing business
requirements specifications and functional specifications for schema and table creation.

Extensive experience in all phases of Software development life cycle (SDLC).

Hands-on experience in tuning mappings, with expertise in identifying and resolving
performance bottlenecks at various levels.

Excellent skills in analyzing system architecture usage and in defining and implementing procedures.
Motivated problem solver and resourceful team member with good written and verbal communication skills.
A quick learner, punctual and trustworthy.

EDUCATION
Bachelor of Technology, JNTU, India.

TECHNICAL SKILLS
Development Technologies: JDK, Python, Unix Shell Scripting and Oracle PL/SQL
Big Data Ecosystems: Hadoop-0.x and Hadoop-2.x
Hadoop Distributions: CDH 3, CDH 4, CDH 5
Databases: MySQL, Oracle, MS Access, MS SQL Server and HBase
File Formats: Compressed files, Text, XML and JSON
IDEs: Eclipse, MyEclipse and NetBeans
Operating Systems: Linux, Unix and Windows

PROFESSIONAL EXPERIENCE

Senior Hadoop Developer
Progressive Insurance, Mayfield Village, OH          Jan 2015 – Present

Designed how data in Hadoop was going to be processed to make BI analysis on the data easier, wrote a set of SQL-like (Hive) jobs implementing parts of the design, and developed code that ingests gigabytes of data into Progressive's Hadoop cluster.

Responsibilities:
Analyze user requirements, procedures and problems to automate or improve existing systems, and review computer system capabilities, workflow and scheduling limitations.
Prepare technical design documents based on business requirements and prepare data flow diagrams.
Implement new designs as per technical specifications.
Load and transform large sets of structured, semi-structured and unstructured data.
Used Sqoop for importing data into HDFS and exporting data from HDFS to the Oracle database.
Integrated Hadoop with Oracle in order to load and then cleanse raw unstructured data in the Hadoop ecosystem, making it suitable for processing in Oracle using stored procedures and functions.
Extensively used Pig for data cleansing: transformations, event joins, filtering of boot traffic and some pre-aggregations before storing the data onto HDFS.
Developed Pig scripts for ETL-style operations on captured data and for delta-record processing between newly arrived data and data already in HDFS.
Built re-usable Hive UDF libraries for business requirements, which enabled users to apply these UDFs in Hive querying.
Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
Optimized Hive tables with techniques like partitioning and bucketing to provide better performance with Hive-QL queries.
Experience in using the Map-Reduce programming model for batch processing of data stored in HDFS.
Developed Java Map-Reduce programs on log data to transform it into structured form and find user location, login/logout times and time spent.
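The Map-Reduce log analysis described in this role can be sketched in miniature with pure Python. This is only an illustration of the map/group/reduce flow: the log format, field names and login/logout pairing are invented, and a production job would be Java Mapper/Reducer classes running over HDFS data rather than an in-memory loop.

```python
# Minimal sketch of a Map-Reduce pass over session logs (Hadoop Streaming style).
# The "user,action,epoch_seconds" log format is hypothetical.
from collections import defaultdict

def mapper(line):
    # "user,action,epoch_seconds" -> (user, (action, timestamp))
    user, action, ts = line.strip().split(",")
    yield user, (action, int(ts))

def reducer(user, events):
    # Pair each login with the matching logout to compute time spent.
    logins = sorted(ts for action, ts in events if action == "login")
    logouts = sorted(ts for action, ts in events if action == "logout")
    return user, sum(out - in_ for in_, out in zip(logins, logouts))

def run_job(lines):
    # The shuffle/sort phase is simulated by grouping mapper output per key.
    grouped = defaultdict(list)
    for line in lines:
        for user, event in mapper(line):
            grouped[user].append(event)
    return dict(reducer(u, evs) for u, evs in grouped.items())

logs = ["alice,login,100", "alice,logout,160", "bob,login,200", "bob,logout,230"]
print(run_job(logs))  # {'alice': 60, 'bob': 30}
```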

Environment: Big Data Platform (CDH 5), Hadoop HDFS, HBase, Pig, Hive, Oracle 10g, Java, Shell Scripts, MRUnit, JUnit, PigUnit, Eclipse, Advanced SQL and PL/SQL

Senior Hadoop Developer
Percento Technologies, Houston, TX          Jul 2013 – Dec 2014

Percento Technologies is a networking company that transforms how people connect, communicate and collaborate. At Percento Technologies, very large datasets about customers, products and network activity across its assets (devices, servers, networks, applications and data) represent hidden business intelligence. This project aimed to find management insights that enable decision making by analyzing past sales, in order to increase Percento Technologies revenues by identifying hidden opportunities for partners to sell services. It requires processing unstructured data and very large data sets far more quickly and at far less cost, in addition to moving the customer install base and service contracts to the Hadoop platform to provide more value in the future through reusability by other Percento Technologies business teams for their own initiatives.

Responsibilities:
Integrated utility systems which supported the smart metering system.
Ensured quality integration of smart meters into the system's data acquisition and processing functions.
Enabled the use of metering data for a variety of applications such as billing, outage detection and recovery, fraud detection, energy efficiency, finance, customer care and a variety of analytics.
Analyzed large amounts of raw data in an effort to create information.
Compiled technical specifications that allowed IT to create data systems.
Worked extensively on performance optimization, adopting/deriving appropriate design patterns for Map-Reduce jobs by analyzing I/O latency, combiner time, map time, reduce time, etc.
Performance optimization for Map-Reduce and Hive scripts.
Experienced in extending Hive and Pig core functionality by writing custom Pig UDFs using Java and Python.
Troubleshooting: used Hadoop logs to debug the scripts.
Responsible for technical reviews and provided quick-fix solutions for the customer on production defects.
Developed Map-Reduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables in the enterprise data warehouse (EDW).
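The parse-and-stage step of the EDW load described in this role might look like the following minimal sketch. The raw meter-reading format and field names are invented, and day-partitioned staging tables are simulated with an in-memory dict rather than warehouse tables.

```python
# Hedged sketch of a parse-then-stage flow for smart-meter readings.
# The "meter_id;yyyy-mm-dd;kwh" record layout is hypothetical.

def parse_reading(raw):
    meter_id, day, kwh = raw.split(";")
    return {"meter_id": meter_id, "day": day, "kwh": float(kwh)}

def stage(rows):
    # Group staging rows by day, mirroring day-partitioned EDW tables.
    partitions = {}
    for row in rows:
        partitions.setdefault(row["day"], []).append(row)
    return partitions

staged = stage([parse_reading("m1;2014-01-02;3.5"),
                parse_reading("m2;2014-01-02;1.0"),
                parse_reading("m1;2014-01-03;2.25")])
print(sorted(staged))  # ['2014-01-02', '2014-01-03']
```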

Provide support for data analysts in running ad-hoc Pig and Hive queries.
Developed PL/SQL procedures, functions and packages using Oracle utilities like PL/SQL and SQL*Loader, and handled exceptions to implement key business logic.
Utilized the PL/SQL bulk collect feature to optimize ETL performance; fine-tuned and optimized a number of SQL queries and performed code debugging.
Involved in design and development of the server-side layer using XML, Servlets, JDBC and JDK patterns in the Eclipse IDE; extensively used Core Java and JDBC.
Involved in testing Map-Reduce programs using the MRUnit and JUnit testing frameworks.
Developed a data pipeline using Flume, Pig and Map-Reduce to ingest customer behavioral data and purchase histories into HDFS for analysis.
Involved in unit testing, system integration testing and enterprise user testing.

Environment: Big Data Platform (CDH 4), Hadoop HDFS, Map-Reduce, Pig, Hive, Flume, Sqoop, IBM DB2, Java, PL/SQL, UNIX, Eclipse

Hadoop Developer
BB&T, NC          Jan 2013 – Jun 2013

Involved in design and development phases of the Software Development Life Cycle (SDLC) using Scrum methodology. Designed and coded application components in an agile environment utilizing a test-driven development approach.

Responsibilities:
Written Map-Reduce Java programs to analyze log data for large-scale weather data sets.
Wrote MRUnit test cases to test and debug Map-Reduce programs on a local machine.
Worked with NoSQL databases like HBase, creating tables to load large sets of semi-structured data coming from various sources.
Customized the parser/loader application for data migration to HBase.
Developed UNIX & SQL scripts to load large volumes of data for data mining and data warehousing.
Worked on importing and exporting data using Sqoop between HDFS and relational database systems.
Developed custom UDFs in Pig Latin using Python scripts to extract data from sensor-device output files and load it into HDFS.
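A Python UDF for the sensor-device files described here might look like the sketch below. The record layout and the unit conversion are invented for illustration; in an actual Pig script the function would be registered and applied per tuple rather than called directly.

```python
# Hedged sketch of a Python UDF for parsing raw sensor records.
# The "sensor_id|temp_celsius|status" layout is hypothetical.

def parse_sensor(record):
    sensor_id, temp_c, status = record.split("|")
    fahrenheit = round(float(temp_c) * 9 / 5 + 32, 1)
    return sensor_id, fahrenheit, status == "OK"

# In a Pig script the function would be registered and applied per tuple, e.g.:
#   REGISTER 'udfs.py' USING jython AS udfs;
#   readings = FOREACH raw GENERATE udfs.parse_sensor(line);
print(parse_sensor("s42|21.5|OK"))  # ('s42', 70.7, True)
```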

Environment: Big Data Platform (CDH 4), Hadoop HDFS, Map-Reduce, Pig, Hive, Sqoop, Flume, Java, Oracle 10g, Eclipse, XML

Hadoop Developer
Self Reliance Federal Credit Union, NY          Oct 2011 – Dec 2012

The genesis history container provides retail risk data integration for banking data from 160 countries. Account-level data for primary retail bank products like credit cards, mortgages and loans were collected through a standard template. All data from the different countries were staged in Hadoop data clusters as part of the retail risk data ingestion process. Data were processed, validated and profiled in Hadoop; profiling results were captured in Hive tables (external tables) and persisted back to Oracle DB. Data were profiled against tech checks, data type checks, domain checks, data range checks, sanity checks, business checks and outlier checks, as required by FED banking regulatory standards. Aggregation and enrichment of data were performed using Hive job processes. The results were submitted for business evaluation; once approved by the business, the respective data were aggregated at different levels with data enrichments.

Responsibilities:
Involved in analysis, design and development of data collection, data ingestion, data profiling and data aggregation.
Working on development of the controller, batch and logging modules using JDK 1.6.
Worked on development of the data ingestion process using FS Shell and data loading into HDFS.
Imported data using Sqoop to load data from Oracle to HDFS on a regular basis.
Involved in creating Hive tables, loading data and running Hive queries on that data.
Implemented partitioning, dynamic partitions and buckets in Hive.
Written Hive queries to parse the logs and structure them in tabular format to facilitate effective querying on the log data.
Written Hive queries for data analysis to meet the business requirements.
Developed Pig UDFs to pre-process data for analysis.
Developed complex, multi-step data pipelines using Spark.
Written Spark SQL queries for data analysis.
Developed scripts and batch jobs to schedule various Hadoop programs.
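The layered profiling described for this role (data type checks, range checks, domain checks) can be illustrated with a minimal sketch. The field names, the product domain and the balance threshold are all invented; real checks would come from the regulatory rule definitions.

```python
# Hedged sketch of per-record profiling checks (type, range, domain).
# Field names, domain values and thresholds are invented for illustration.

def profile_record(record):
    balance = record.get("balance")
    checks = {
        "type_check": isinstance(balance, (int, float)),
        "range_check": isinstance(balance, (int, float)) and 0 <= balance <= 1_000_000_000,
        "domain_check": record.get("product") in {"credit_card", "mortgage", "loan"},
    }
    checks["passed"] = all(checks.values())
    return checks

print(profile_record({"balance": 2500.0, "product": "mortgage"}))  # all checks True
```

In the project itself such results were captured in external Hive tables per record; here a dict per record stands in for that profiling output.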

Working on the definition of Hive queries for different profiling rules like business checks, outlier checks, and domain and data-range validation.
Working on automating the generation of Hive queries and Map-Reduce programs.
Designed the framework for data ingestion, data profiling and generation of the risk aggregation report based on various business entities.
Mapped the business requirements and rules to the Risk Aggregation System.
Involved in the initial POC implementation using Hadoop Map-Reduce.
Developed user-defined functions in Java and Python to facilitate data analysis in Hive and Pig.
Managed the end-to-end delivery during the different phases of the software implementation.
Code debugging and creating documentation for future use.

Environment: Big Data Platform (CDH 3), Map-Reduce, Hive, Pig Latin scripting, Hive scripting, JDK 1.6, Oracle

Java Developer
Next Step Solutions, India          Aug 2009 – Sep 2011

This is an e-commerce project for online shopping. The application server provided a transaction API for accessing data from Oracle. It allows customers to make selective payments via credit card/EFT (Electronic Fund Transfer); customers can change their default credit card and EFT information, view their open orders and invoices, and look up phone parts for shopping and ordering.

Responsibilities:
Involved in various stages of the project, from architecture design and business analysis through development, testing and finally the production stage.
Designed and developed different modules of the project using Java/J2EE.
Developed session beans as enterprise business service objects.
Used JDBC to invoke stored procedures and for database connectivity to Oracle.
Contributed to an effective order processing system and simplified the existing order process, which proved to be more efficient.

Environment: Core Java, JDK 1.3, EJB, JDBC, MVC framework, Eclipse, Tomcat application server, Oracle, JUnit

Java Developer
Visual Soft, India          Jun 2008 – Jul 2009

This project attempts to automate the income tax and sales tax procedures. The project is not specific to any country, but to demonstrate the working of the model, the guidelines and business rules specific to the Indian government's tax procedures have been considered.

Responsibilities:
Gather user requirements, followed by analysis and design.
Developed the complete web tier of the application with the Struts MVC framework.
Developed JSPs, form beans, action classes and response beans.
Involved in development of the user interface using JSPs.
Developed business objects and business object helpers, which interact with middleware stubs.
Coded Servlets for the transactional model to handle many requests.
Implemented the business delegate pattern to separate the view from the business process.
Worked on the technical design to conform to the framework.
Developed PL/SQL stored procedures and triggers.
Used JavaScript for client-side validations.
Used Cascading Style Sheets in the application.
Extensively used XML to code configuration files.
Involved in developing unit test classes using JUnit.
Performed functional, integration, system and validation testing.
Used CVS for version control, integrated with WSAD.
Used Tomcat as the application server for the application.

Environment: JDK 1.3, Servlets, JSP, Apache Struts 1.1, EJB 2.0, XML, HTML, CSS, JavaScript, JDBC, JUnit, Rational Rose, Rational ClearCase, WSAD, JBoss, Oracle 9i, PL/SQL, UNIX