
Jyoti Neupane

703-953-7734
jyothineupane@gmail.com
Manassas, VA - 20110
US Citizen
https://www.linkedin.com/in/jyoti-n-78b560145/
Senior Big Data Hadoop Admin

Hadoop Architect / Data Engineer

Professional Certifications


Hortonworks Hadoop HDP Certified Administrator (HDPCA)

Education
 B.B.S. (Bachelor's in Business Studies)

Professional Summary

 Over 14 years of rich IT experience and expertise, which includes AWS cloud architecting, system administration, DevOps engineering and development, AWS Big Data, Hortonworks and Cloudera Hadoop administration, development and data analysis, project management/coordination, infrastructure planning, architecture and support, system administration and application development for complex ERP/PeopleSoft Financials applications, middleware, SOA and WebLogic Application Server administration, Oracle HTTP Server, Apache, Tomcat and Sun ONE Web Server installation, configuration, administration and development support, enterprise relational data modeling, Oracle Database administration and development, IBM DB2 UDB administration, and data warehousing using Informatica, along with SQL, SQR, Java, Python, C++, Perl, Ruby and UNIX programming.
 Extensive experience administering, developing and performing data analysis with Apache Hive, Pig, Presto, Spark, Sqoop and Flume in Hortonworks, Cloudera and AWS Cloud environments.
 Administered medium to large scale Hadoop Clusters using Ambari for Hortonworks HDP and Cloudera
Manager for Cloudera CDH versions.
 Extensively worked with AWS EC2, IAM, VPC, Elastic Beanstalk, Lambda, S3, RDS, DynamoDB, ElastiCache Redis, Redshift, EMR, Kinesis, Elasticsearch, SNS, SQS, API Gateway, CloudWatch, CloudFormation, CloudTrail, OpsWorks, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, Jenkins, GitHub, Chef, Puppet and Ansible.
 Experience with SDLC and Agile Project Management methodologies including Scrum and Kanban.
 Extensively worked with Enterprise Business teams, Application/Development teams, Testing teams,
Production Support teams, vendors, consultants, contractors and MSPs for coordinating, implementing and
supporting business critical projects/initiatives.

Details of Project Experience:

Geico, Atlanta GA
Senior Big Data Hadoop Admin January 2022 to Present

Roles and Responsibilities:

 Architected and administered the installation and configuration of Hadoop/HDFS on RHEL 7.4. Provided customers with advice on best practices for deploying Hadoop services in their production environments.
 Installed, configured, deployed and troubleshot CDH and applications on the CDH cluster, including High Availability setup.
 Worked on cluster installation and the commissioning and decommissioning of Data Nodes.
 Experience managing Hadoop services including HDFS, HBase, Hive, Impala, MapReduce, Pig, Spark and Kafka.
 Job scheduling, monitoring, debugging, and troubleshooting.
 Developed Ansible, Python and Unix scripts for cluster pre/post-installation activities, maintenance and Hadoop admin job automation, DevOps (IaaS).
 Monitored cluster for performance, networking, and data integrity issues.
 Responsible for troubleshooting issues in the execution of MapReduce/Spark jobs
 Installed, configured and maintained Apache Hadoop clusters for application development and Hadoop tools such as Hive, Pig, HBase, Spark, NiFi, Impala, Kafka and Sqoop.
 Setup of Apache Ranger for Data Governance and Auditing.
 Developed data pipelines to extract and load data into the Big Data cluster using Python and Unix scripting (see the sketch after this list).
 Developed Ansible scripts for cluster maintenance and admin job automation for Hive, HBase and Kafka with a Control-M interface.
 Experience in Encryption, TLS/SSL, LDAP, and Linux Admin
 Set up ETL tools (Apache NiFi, Ab Initio), troubleshot jobs and performed maintenance.
 Experience with AWS (EC2, S3, EMR)
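
A minimal Python sketch of the ingestion step referenced in the data-pipeline bullet above; the staging directory, HDFS target path and file pattern are hypothetical, and the script simply wraps the standard hdfs CLI:

#!/usr/bin/env python3
# Push staged flat files into HDFS ahead of downstream processing.
import subprocess
from pathlib import Path

STAGING_DIR = Path("/data/staging/claims")   # hypothetical local landing zone
HDFS_TARGET = "/user/etl/claims/incoming"    # hypothetical HDFS directory


def put_to_hdfs(local_file: Path, hdfs_dir: str) -> None:
    # -f overwrites a previously loaded copy of the same file.
    subprocess.run(["hdfs", "dfs", "-put", "-f", str(local_file), hdfs_dir], check=True)


if __name__ == "__main__":
    for f in sorted(STAGING_DIR.glob("*.csv")):
        put_to_hdfs(f, HDFS_TARGET)
        print(f"loaded {f.name} -> {HDFS_TARGET}")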

Environment: RHEL 7, R-server, Hadoop, Map Reduce, HDFS, Hive, Java, Ranger, Yarn, HDP 2.6, Map, Flat files, Oracle
11g/10g, UNIX Shell Scripting

Safeway, Boise, ID
Senior Big Data Hadoop Admin February 2020 to December 2021

Roles and Responsibilities:

 Ambari Manager setup and management - technical expertise.


 Installed and Deployed the Spark & Apache NiFi application on Cluster.
 Experience with AWS (EC2, S3, EMR).
 Setup NiFi for Data loading into Hadoop and involved in developing data mappings.
 Worked on cluster installation, commissioning and decommissioning of Data Nodes, and Name Node recovery.
 Installed, configured and administered large Hadoop clusters.
 Performed data migrations/transfers between Hadoop clusters using HBase export snapshots (see the sketch after this list).
 Monitored cluster for performance, networking, and data integrity issues.
 Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting log files.
 Installed, configured and maintained Apache Hadoop clusters for application development and Hadoop tools such as Hive, Pig, HBase, ZooKeeper and Sqoop.
 Ansible Script development for the cluster Pre/Post Installation activities, maintenance, and admin job
automation.
 Set up ETL tools (StreamSets, Apache NiFi), troubleshot jobs and performed maintenance.
 Experience with CI/CD processes, in particular Git, Jenkins, Jira and Confluence.
 Responsible for developing data pipelines using Flume, Sqoop and Pig to extract data from web logs and store it in HDFS.
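
A minimal Python sketch of the HBase snapshot-based migration referenced above; the table name, snapshot name and destination cluster address are hypothetical, and both steps are issued through the standard hbase CLI:

#!/usr/bin/env python3
# Snapshot an HBase table and ship it to another cluster with ExportSnapshot.
import subprocess

TABLE = "customer_profile"                     # hypothetical source table
SNAPSHOT = f"{TABLE}_snap"                     # snapshot name
DEST_HBASE_ROOT = "hdfs://dest-nn:8020/hbase"  # hypothetical target cluster root


def create_snapshot() -> None:
    # Run the snapshot command through a non-interactive hbase shell session.
    cmd = f"snapshot '{TABLE}', '{SNAPSHOT}'"
    subprocess.run(["hbase", "shell", "-n"], input=cmd, text=True, check=True)


def export_snapshot(mappers: int = 16) -> None:
    # Copy snapshot metadata and HFiles to the destination cluster.
    subprocess.run(
        ["hbase", "org.apache.hadoop.hbase.snapshot.ExportSnapshot",
         "-snapshot", SNAPSHOT,
         "-copy-to", DEST_HBASE_ROOT,
         "-mappers", str(mappers)],
        check=True,
    )


if __name__ == "__main__":
    create_snapshot()
    export_snapshot()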

Thomson Reuters, MN
Senior Big Data Hadoop Admin November 2018 to February 2020

Roles and Responsibilities:

 Installed, configured, maintained, supported, and administered Cloudera/CDH Hadoop Software.


 Tested, verified, and installed all Hadoop and Operating System patches and hot fixes adhering to client
approved Change Management procedures.
 Configured and maintained XML configuration files, shell scripts and variables.
 Configured and maintained logging services.
 Configured CDH clusters and Hadoop daemons.
 Configured Name Node, Resource Manager, HBase and other Hadoop components for High Availability
and Automatic Failover.
 Involved in implementing Kerberos security for CDH Clusters.
 Performed resource management.
 Set and maintained HDFS quotas.
 Configured and managed Static Service Pools, Dynamic Resource Pools, Fair Scheduler and Capacity Scheduler.
 Managed Hadoop processes, including starting and stopping processes manually or using initialization scripts.
 Performed HDFS maintenance tasks - adding and decommissioning DataNodes, checking file-system integrity, balancing HDFS block data, replacing failed disks, etc. (see the sketch after this list).
 Performed MapReduce maintenance tasks.
 Performed ongoing optimization and performance tuning.
 Configured, maintained and performed data ingestion.
 Configured and performed data and system backup and recovery.
 Troubleshot and diagnosed issues and implemented fixes and changes to resolve incidents.
 Produced reports related to the Cloudera Hadoop environment.
 Involved in installing Cloudera software, including Cloudera Manager, CDH, and other managed services.
 Optimized CDH performance and tuned YARN, HBase, Spark, Kafka and Impala for improved performance.
 Involved in setting up Data Replication for HDFS, Hive and Impala and created HDFS Snapshots.
 Administered Cloudera Manager and Cloudera Navigator.
 Monitored CDH Clusters, Services, Hosts and Activities, and analyzed Cluster Utilization reports for effectively troubleshooting issues and optimizing performance.
 Implemented RBAC leveraging Apache Sentry and enforced authorization policies for Hive and Impala
security.
 Implemented HDFS ACLs for fine-grained access control.
 Effectively triaged CDH Cluster production issues providing quick root cause analysis and resolution.
 Involved in Cluster Capacity Planning and Forecasting.
 Involved in setting up and deploying Oozie jobs.
 Involved in implementing Sqoop and Flume jobs.
 Upgraded Cloudera cluster from CDH 5.16 to CDH 6.2.
 Closely worked with Data Scientists, Developers, Network Admins, DBAs and BI teams.
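
A minimal Python sketch of the routine HDFS maintenance referenced above (quota enforcement, filesystem check and block rebalancing); the project directory and quota value are hypothetical:

#!/usr/bin/env python3
# Routine HDFS housekeeping driven through the hdfs CLI.
import subprocess

PROJECT_DIR = "/user/analytics"   # hypothetical project directory
SPACE_QUOTA = "10t"               # hypothetical 10 TB space quota


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Enforce a space quota on the project directory.
    run(["hdfs", "dfsadmin", "-setSpaceQuota", SPACE_QUOTA, PROJECT_DIR])
    # Check file-system integrity for the project subtree.
    run(["hdfs", "fsck", PROJECT_DIR, "-files", "-blocks"])
    # Rebalance blocks across DataNodes (10% utilization threshold).
    run(["hdfs", "balancer", "-threshold", "10"])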

Environment: Cloudera CDH 5.16, CDH 6.2, Cloudera Manager, Cloudera Navigator, HDFS, Map Reduce, YARN, HBase,
Sentry, Hive, Impala, Kafka, Sqoop, Flume, Spark, PySpark, Zeppelin, Oozie, Hue, MySQL, Oracle DB

Nissan North America, USA


Senior Big Data Hadoop Admin October 2017 to November 2018

Roles and Responsibilities:

 Involved in Big Data/Hadoop Initial Proof of Concept effort for EQUIP project and evaluated Cloudera
CDH, Hortonworks HDP.
 Involved in capacity planning to project the workload in the Big Data environment over the coming years. Documented all hardware and software configurations and collected the footprint.
 Worked with various projects/tenants on their deployments in the Big Data environment.
 Automated log rotation in the Hadoop cluster to manage OS storage capacity.
 Configured NameNode and Resource Manager High Availability.
 Implemented Kerberos security integrated with LDAP/AD for authentication.
 Installed and configured System Security Services Daemon (SSSD) to sync AD/Unix user authentication.
 Installed and configured Ranger policies for Hive and HDFS.
 Perform daily admin responsibilities. Monitor overall cluster health.
 Responsible for onboarding new users in Big Data Environment and granting required access.
 Troubleshooting and analyzing Hadoop cluster services/component failures and job failures.
 Extensively leveraged Workflow Manager with Sqoop to ingest data into HDFS from various RDBMS sources such as Microsoft SQL Server and Oracle.
 Integrated Microsoft Machine Learning Server (R Server 9.3) with the Hadoop cluster.
 Set up Oozie workflows to schedule batch ingestions.
 Configured and tuned Hadoop components for better performance.
 Configured HiveServer2 Interactive (LLAP) for ad-hoc query performance.
 Configured HiveServer2 HTTP mode.
 Performance tuning of queries involving large Hive tables, achieving the most improvement using techniques such as Tez, the ORC storage format, vectorized query execution and the cost-based optimization engine (see the sketch after this list).
 Installed and configured the IBM MQ Client to retrieve data from IBM MQ Server.
 Upgraded Ambari and HDP from 2.6.0 to 2.6.5.
 Installed an HDF 3.1 cluster using Ambari and secured NiFi with SSL.
 Kerberized the HDF cluster.
 Integrated AD with HDF.
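
A minimal Python sketch of the Hive tuning settings referenced above, applied through beeline; the JDBC URL and table names are hypothetical:

#!/usr/bin/env python3
# Apply Tez, vectorization and CBO settings, then query an ORC-backed table.
import subprocess
import tempfile

JDBC_URL = "jdbc:hive2://hive-llap.example.com:10500/default"  # hypothetical HiveServer2 URL

TUNED_SCRIPT = """
SET hive.execution.engine=tez;
SET hive.vectorized.execution.enabled=true;
SET hive.cbo.enable=true;
SET hive.compute.query.using.stats=true;

-- Store the large fact table as ORC so vectorization and predicate pushdown apply.
CREATE TABLE IF NOT EXISTS sales_orc STORED AS ORC AS SELECT * FROM sales_raw;

SELECT region, SUM(amount) FROM sales_orc GROUP BY region;
"""

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile("w", suffix=".hql", delete=False) as f:
        f.write(TUNED_SCRIPT)
        script_path = f.name
    # Execute the script non-interactively against HiveServer2.
    subprocess.run(["beeline", "-u", JDBC_URL, "-f", script_path], check=True)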

Environment: RHEL 7, R-server, Hadoop, Map Reduce, HDFS, Hive, Java, Ranger, Yarn, HDP 2.6, Map, Flat files, Oracle
11g/10g, UNIX Shell Scripting

Fannie Mae, VA, U.S.A.


Senior Big Data Hadoop Engineer/Architect January 2013 to October 2017

Roles and Responsibilities:

 Actively involved in Document 5.0 project implementation for transitioning on-premise case file data into
Fannie Vault/Hortonworks Hadoop Transactional Cluster on AWS primarily with Knox, Kafka, Storm,
Zookeeper and HBase as well as Analytic Cluster leveraging YARN, Pig, Hive, Spark, Sqoop, Flume, HBase and
Phoenix
 Involved in Delivery-ODS Transition to Enterprise Big Data Platform (Data Lake) with capabilities for Batch
Ingestion, Real-time/Continuous Integration, Analytical Processing, Data Processing, Streaming, Interactive,
Batch Data Extraction and Advanced Analytics
 Involved in Big Data/Hadoop Initial Proof of Concept effort for DU/Doc5.0 project and evaluated Cloudera
CDH 5.0.0, Hortonworks HDP, HBase, Cassandra, Mongo DB, Oracle XML Database and AWS Dynamo DB
 Involved in installing, configuring, performance tuning and upgrading CDH Hadoop transactional and analytic
clusters
 Worked with Cloudera Manager, installed and configured CDH Clusters
 Developed Impala/SQL scripts for data analysis
 Installed and configured Ambari Server and Ambari Agents
 Involved in setting up Kerberos and Ranger security for Hadoop Cluster
 Configured Name Node and Resource Manager High Availability
 Configured Rack topology
 Developed Unix scripts to pre-process raw data for data ingestion into Hadoop cluster
 Involved in developing and supporting ETL processes with high data quality
 Developed and automated Sqoop and Flume Autosys jobs for data ingestion
 Developed and implemented Pig, Hive QL and Impala scripts
 Performance tuning of queries involving large Hive tables achieving 10x improvement using techniques such
as Tez, ORC storage format, Vectorized query execution, and cost-based optimization engine
 Configured HiveServer2 and Hive Metastore High Availability
 Worked with Hive Static and Dynamic Partitioning, computation and maintenance of Table, Partition and
Column level statistics
 Involved in administering and configuring Hive Security and SQL standards-based authorization
 Involved in architecting Hive and Pig and integration with HBase
 Involved in performing analytics using Hive, Sqoop, MySQL DB and Datameer.
 Worked with Tableau for on-prem data analysis and visualization reports
 Developed and implemented Java, Python and UNIX/Linux Shell scripts
 Involved in setting up EMR clusters, migrating data from on-premise to the AWS Cloud and performing data
analysis with Pig, Hive, Spark and Presto on HDFS and AWS S3
 Worked with Hortonworks, Cloudera and MapR distributions of Hadoop
 Installed and configured AWS Aurora DB, PostgreSQL, Dynamo DB and Redshift Database.
 Worked with AWS QuickSight and a 50+ TB Redshift data warehouse for data visualization and advanced analytics.
 Involved in the analysis, design, development, implementation, deployment and support of Vendor Data
Retrieval System Spring Framework/Java based application in multi-AZ Elastic Beanstalk managed AWS Cloud
environments using Oracle RDS.
 Involved in the transformation of on-premise Transactional Data Store into AWS Cloud.
 Involved in architecting and on-boarding of the mobile application to AWS Cloud platform for the customers to
leverage real-time monitoring of automatic underwriting transactional activity.
 Involved in architecting and supporting a RESTful Java DynamoDB application using Dropwizard as part of the Customer Digital Experience project initiative.
 Involved in setting up Amazon VPCs, Subnets, Security Groups, NACLs and Routing Tables.
 Involved in setting up and supporting Blue/Green Elastic Beanstalk Environments in AWS Cloud.
 Involved in setting up Elastic Load Balancers, EC2 instances, Cloud Watch monitoring/alerting and
AutoScaling.
 Involved in log analytics with CloudTrail and Splunk.
 Involved in setting up snapshots for EBS volumes.
 Involved in configuring S3 and Glacier life cycle policies and restricted bucket policies (see the sketch after this list).
 Worked with Cloud Formation templates for creating IAM Roles and building/rebuilding AWS environments.
 Involved in configuration management, application deployment and task automation with Puppet manifests.
 Performed Security administration in IAM creating users, groups, cross account roles and custom policies.
 Involved in performance tuning and troubleshooting issues with the Redshift cluster.
 Involved in project implementations adopting Agile/SAFe Scrum, Kanban and DevOps CI/CD methodologies
and using Confluence/JIRA.
 Involved in setting up CloudFormation JSON templates, OpsWorks stacks and Chef Cookbooks.
 Involved in implementing Docker containers and managing with EC2 Container Service.
 Involved in setting up and automating DevOps CI/CD pipeline with Ruby scripting and using Git, Jenkins, Nexus
and AWS CI/CD services.
 Performed release deployments using Jenkins.
 Involved in on-boarding AWS CodeDeploy, CodeCommit and CodePipeline.
 Involved in conversion of RabbitMQ to AWS SQS.
 Mentored and provided Training to project and support teams in Big Data/HDP Hadoop and AWS Cloud
essentials, development and administration aspects.
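
A minimal boto3 sketch of an S3/Glacier life cycle policy of the kind referenced above; the bucket name, prefix and retention periods are hypothetical:

#!/usr/bin/env python3
# Transition objects to Glacier after 90 days and expire them after 7 years.
import boto3

BUCKET = "casefile-archive-example"   # hypothetical bucket name

LIFECYCLE_RULES = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "casefiles/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 7 * 365},
        }
    ]
}

if __name__ == "__main__":
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET, LifecycleConfiguration=LIFECYCLE_RULES
    )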

Environment: AWS Elastic Beanstalk, EC2, Elasticsearch, CloudWatch, CloudTrail, S3, Dynamo DB, Amazon Aurora
RDS, API Gateway, PostgreSQL, Cloud Formation, EMR, Redshift, Kibana, Quicksight, HDFS, Map Reduce, YARN,
Pig, Hive, Sqoop, Flume, Impala, Spark, Presto, Kafka, Storm, HBase, Knox, Ambari, Cloudera Manager, Oozie, Hue,
Puppet, Oracle Solaris 10, SuSE Linux 10/11, Autosys R 11 v3/4.5.1, JDK 1.7/1.6, WebLogic 10.3.6, Apache Web
Server 2.2.3, tcServer, TIBCO EMS/ESB, Oracle Database 11g/12c, Hortonworks HDP 2.3, Cloudera CDH 5.1,
Jenkins, Git, iCart

Wells Fargo, Charlotte, NC


Hadoop Developer/Admin May 2012 to December 2012

Roles and Responsibilities:

 Analyzed, Designed and developed the system to meet the requirements of business users.
 Participated in the design review of the system to perform Object Analysis and provide best possible
solutions for the application.
 Imported and exported terabytes of data between HDFS and Relational Database Systems using Sqoop (see the sketch after this list).
 Involved in setting up job schedules in Oozie.
 Developed Pig and Hive jobs for MapReduce workloads.
 Collected and aggregated large amounts of log data using Apache Flume and staging data in HDFS for
further analysis.
 Developed UNIX and Python scripts for data cleaning and preprocessing.
 Developed Map Reduce (YARN) jobs for accessing and validating the data.
 Involved in managing and reviewing Hadoop log files.
 Load and transform large sets of structured, semi structured and unstructured data.
 Responsible for managing data coming from different sources.
 Involved in loading data from UNIX file system to HDFS.
 Installed and configured Hive and developed Hive QL scripts.
 Implemented partitioning, dynamic partitions and buckets in Hive.
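
A minimal Python sketch of a Sqoop import of the kind referenced above; the JDBC connection string, credentials path and table names are hypothetical:

#!/usr/bin/env python3
# Pull an Oracle table into HDFS with Sqoop.
import subprocess

SQOOP_IMPORT = [
    "sqoop", "import",
    "--connect", "jdbc:oracle:thin:@//oradb.example.com:1521/ORCL",  # hypothetical source DB
    "--username", "etl_user",
    "--password-file", "/user/etl_user/.ora_pw",   # password kept in HDFS, not on the command line
    "--table", "LOANS.PAYMENTS",                   # hypothetical source table
    "--target-dir", "/user/etl_user/payments",
    "--split-by", "PAYMENT_ID",
    "--num-mappers", "4",
]

if __name__ == "__main__":
    subprocess.run(SQOOP_IMPORT, check=True)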

Environment: Hadoop, Map Reduce, HDFS, Hive, Java, Hadoop distribution of Hortonworks, Cloudera, Map,
Flat files, Oracle 11g/10g, UNIX Shell Scripting, ClearCase

Embarq / CenturyLink Corporation, KS, U.S.A.


Systems Analyst May 2010 to April 2012

Roles and Responsibilities:

 Installed, configured, tuned, monitored, migrated and administered WebLogic Server domains/ clusters for
internal and customer facing external applications in UNIX environments.
 Deployed Java and JMS applications in WebLogic Server and AquaLogic Server/Oracle Service Bus SOA
Production and Lower environments.
 Installed, configured and administered Oracle HTTP Servers, Sun ONE Web Servers and Apache/ Tomcat Web
Servers.
 Installed and Updated SSL certificates for Web Servers.
 Installed, configured, implemented and supported third party Enterprise software in corporate UNIX
environments including EMC Documentum, Symantec Enterprise Security Manager and Connect Direct.
 Performed PeopleSoft Application Installs, Upgrades, Data Conversions, Migrations, Development, Troubleshooting and Production support for the entire suite of Financials and Supply Chain Management modules.
 Configured and supported Integration Broker and Messaging Services to interface with external Messaging
nodes including Vitria, EXE, WebStore and Siebel.
 Configured and tuned Dedicated Application Messaging Servers for high-volume and high-priority business-critical Message Channels.
 Administered Application and user security for PeopleSoft applications.
 Upgraded Quest STAT from 5.1 to 5.4.2 and configured with PeopleSoft 9.0 environments, PeopleSoft
Staging Database and STAT Database.
 Performed PeopleSoft migration and change management using Quest STAT.
 Developed and implemented SQL scripts and UNIX Shell scripts.
 Served as team lead for the Web Apps team.
 Extensively involved in the Hadoop Proof of Concept effort with Cloudera and Hortonworks Distributions of
Hadoop.

Environment: PeopleSoft FSCM 9.1 / PeopleTools 8.50.03, PeopleSoft HRMS 9.1 / PeopleTools 8.50.03, WebLogic
9.2/10.3, Oracle Tuxedo 10gR3, Oracle Database 11g/10g, Toad, Sun Solaris 10, Windows 2003 Server, Quest STAT
5.4, Oracle HTTP Server 10g, Sun ONE Web Server 6.1, Rational ClearQuest 7.0.1, Mercury Test Director, CA
AutoSys, BMC Remedy, BMC Control-M, BMC Patrol, MS Office 2003, CDH and HDP Distributions of Hadoop

Sprint United, KS, U.S.A.


Application Administrator October 2009 to April 2010
Roles and Responsibilities:

 Worked with Data Conversion from DB2 UDB to Oracle Database


 PeopleSoft Application Upgrade and Data Conversion from FSCM / HRMS 8 SP1/Tools 8.19.05 running on
Oracle Database 9i to FSCM / HRMS 8.8/Tools 8.44.13 running on Oracle Database 10g.
 Administered PeopleSoft Portal and Security for Users, Roles and Permissions.
 Configured and tuned PeopleSoft Web Server, PeopleSoft Application Server and Process Scheduler.
 Configured and tuned PeopleSoft Integration Broker and PUB / SUB Message Servers.
 Installed and configured PeopleSoft FSCM / HRMS 8.8 on Linux servers for testing purpose as part of overall
IBM e-Server initiative.
 Upgraded Quest STAT from 5.0 to 5.1 and reconfigured for FSCM / HRMS 8.8 environments.
 Performed PeopleSoft migrations and change management functions in Production and lower environments
using Quest STAT.
 Developed and implemented SQL and UNIX Shell scripts.
 Worked with Development teams, online business users and production control AutoSys batch operations for
fixing issues.
 Generated reports for PeopleSoft SOX compatibility internal and external audits.
 Applied Tuxedo Middleware Rolling Patch, R291 and upgraded JRE version to 1.4.2_12 in all PeopleSoft
environments for DST compatibility.
 Installed, configured, tuned and administered WebLogic instances.
 Performed design and implementation of PeopleSoft Application / Database file system specifications and
allocations.
 Installed, created, configured and tuned Oracle databases for PeopleSoft.
 Upgraded Oracle database from version 9i to 10g.
 Worked with Oracle Enterprise Manager Database Control for Database monitoring, tuning and troubleshooting.

Environment: PeopleSoft FSCM / HRMS 8.8/8SP1, PeopleTools 8.44.13/8.19.05, WebLogic 8.1/5.9SP12, Tuxedo-
Jolt 6.5/8.1, Server Express COBOL Compiler 1.1/2.1, Oracle Database 9i/ 10g, Sun Solaris 8/10, Windows
2000/2003, Quest STAT 4.0.9 / 5.0, Oracle Enterprise Manager, RMAN, SQL, PL/SQL, CA AutoSys, BMC Control-M,
Mercury Test Director

Sprint United, KS, U.S.A.


Informatica Administrator January 2009 to September 2009

Roles and Responsibilities:

 Analyzed the specifications and was involved in identifying the sources from which data needed to be moved to the data mart.
 Involved in analyzing the scope of the application, defining relationships within and between groups of data, star schema, etc.
 Identified and created different source definitions to extract data from input sources like Flat files, SQL
Server and load into relational tables like Oracle.
 Enhanced and created various database objects in the Data Mart as per changing technical requirements.
 Extensively involved in requirement analysis and created ETL mapping design document.
 Extensively Designed, Developed, Tested complex Informatica mappings and mapplets to load data from
external flat files and other databases.
 Extensively Worked on Informatica tools such as Source Analyzer, Data Warehouse Designer, Transformation
Designer, Mapplet Designer and Mapping Designer.
 Extensively used all the transformations like Source Qualifier, Aggregator, Filter, Joiner, Sorter, Lookup, Update Strategy, Router, Sequence Generator, etc., and used transformation language features such as transformation expressions, constants, system variables and data format strings.
 Involved in running the loads to the data warehouse and data mart involving different environments.
 Extensively worked on workflow manager and workflow monitor to create, schedule, monitor workflows,
worklets, sessions, tasks etc.
 Extensively worked on ETL performance tuning to tune the data load and worked with DBAs on SQL query tuning.
 Responsible for the definition, development and testing of processes/programs necessary to extract data from the client's operational databases, transform and cleanse the data, and load it into data marts.
 Extensively used PL/SQL programming in back-end and front-end functions, procedures and packages to implement business rules.
 Integrated various sources into the staging area of the data warehouse.
 Provided technical support to the Quality Assurance team and Production group.
 Supported the Project Manager in estimating tasks.
 Provided production support to schedule and execute production batch jobs and analyzed log files.
 Extensively involved in ETL testing, Created Unit test plan and Integration test plan to test the mappings,
created test data.
 Involved in all phases of Data quality assurance.
 Developed shell scripts to automate the data loading process and to cleanse the flat file inputs.
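
A minimal sketch (in Python rather than shell) of the kind of flat-file cleansing described in the last bullet; the file paths, delimiter and expected column count are hypothetical:

#!/usr/bin/env python3
# Trim fields and drop blank or malformed rows before the Informatica load.
import csv

SOURCE = "/data/inbound/customers.dat"     # hypothetical raw flat file
CLEANED = "/data/staging/customers.clean"  # hypothetical cleansed output
EXPECTED_COLUMNS = 12                      # hypothetical record width

with open(SOURCE, newline="") as src, open(CLEANED, "w", newline="") as out:
    reader = csv.reader(src, delimiter="|")
    writer = csv.writer(out, delimiter="|")
    for row in reader:
        fields = [f.strip() for f in row]
        # Skip empty lines and rows with the wrong number of columns.
        if not any(fields) or len(fields) != EXPECTED_COLUMNS:
            continue
        writer.writerow(fields)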

Environment: Informatica Power Centre 8.6, Oracle 11g, SQL, UNIX Shell Scripting, Toad

Pfizer, Groton, CT
Informatica Developer May 2008 to December 2008

Roles and Responsibilities:


 Extensively used Informatica PowerCenter for loading data from various sources including flat files, Oracle databases and spreadsheets into targets such as flat files or database tables.
 Created mappings and mapplets using different transformations and validated the mappings.
 Created different transformations such as Lookup, Expression, Joiner, Source Qualifier, Sequence Generator, Router and Filter.
 Created sessions for the mappings, added session logic such as connections and log files, and validated the sessions.
 Scheduled and monitored sessions and workflows using Informatica Workflow Manager and Workflow Monitor.
 Analyzed existing mappings that produced errors and rectified the errors.
 Modified the full Informatica life cycle whenever requirements changed.
 Responsibilities included non-function point estimation, impact analysis, defect closing and preparation of HLD, LLD, test cases, etc.; maintained coding standards according to the naming convention.
 Involved in client handling, defect fixes, peer reviews, status updates, defect logging, and finalizing unit test cases, integration test cases and testing along with peers.

Environment: Informatica Power Centre 8.1, Oracle 9i, SQL, UNIX Shell Scripting
