Lakshay Goyal is a big data developer with 3 years of experience designing, implementing, testing, deploying and maintaining Hadoop clusters. He has worked extensively with Scala, Hive, Hadoop and Spark and is looking for opportunities to contribute positively to building intuitive products. He has a BTech in Computer Science and experience with projects involving a common data lake, credit risk data migration, and Avalon data transformation.
Hi, I'm Lakshay, a big data developer with 3 years of experience in designing, implementing, testing, deploying and maintaining Hadoop clusters. I have worked extensively with Scala, Hive, Hadoop and Spark, and I am now getting to know Java and Node.js. I am looking for good opportunities where I can contribute positively to building intuitive products that help people live more sustainable lives.

KEY SKILLS : SCALA, SPARK, HADOOP, HIVE
OTHERS : PYTHON, LINUX, BASH, HBASE, IMPALA, PHOENIX, GIT, JENKINS
EDUCATION : BTECH IN CSE
DOMAINS : BFSI
EXPERIENCE : ITC INFOTECH, JULY 2018 - PRESENT
PROJECTS SNAPSHOT

Common Layer
Common Layer acts as the Data Lake used for all ETL, migration, reporting and auditing purposes in the organization. I was involved in analyzing banking reporting and audit requirements, developing data migrations between different systems, and shell scripting for the automated data pipelines that are ingested into the Data Lake (an illustrative ingestion sketch follows). Our team consisted of 4 members and we followed Scrum.
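To give a flavor of the ingestion work described above, here is a minimal Spark sketch in Scala of landing a raw feed into the lake. It is not the project's actual code: the landing path, delimiter and table name (hdfs:///landing/..., raw.accounts) are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

object CommonLayerIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("common-layer-ingest")
      .enableHiveSupport() // write into Hive-managed tables in the lake
      .getOrCreate()

    // Read one day's raw delimited feed from a (hypothetical) landing zone.
    val feed = spark.read
      .option("header", "true")
      .option("delimiter", "|")
      .csv("hdfs:///landing/finance/accounts/2020-01-01/")

    // Land it in the raw layer, partitioned by load date for easy reprocessing.
    feed
      .withColumn("load_date", lit("2020-01-01"))
      .write
      .mode("append")
      .partitionBy("load_date")
      .saveAsTable("raw.accounts")

    spark.stop()
  }
}
```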
Avalon
Avalon is a strategy that underpins the transformation of the Finance, Account & Risk functions. The overall goal of Avalon is to deliver a single granular data repository for financial and accounting data, providing traceability, transparency, automation, and new reporting and analytical capabilities. I was involved in analyzing bank data for possible regulatory outcomes in day-to-day reporting, transforming raw-layer data, improving batch timings and the efficiency of the process, and the migration from Spark 1.6 to Spark 2.2, including a new data pipeline structure and a new validation process for Scala/Spark developments (see the migration sketch after the project descriptions). Our team consisted of 5 members and we followed Scrum.

Credit Risk Data Migration
The Credit Risk Data Warehouse (CRDW) is a system used in Santander to store balances and financials; the Risk team uses it to prepare reports in SAS for end users. The initiative involved creating the Credit Risk data in the Data Lake to save the licensing cost of the existing CRDW on Oracle and Mainframe, and to improve the efficiency of the whole batch. I was involved in the analysis and design of the target table structure of the CRDW, writing HiveQL queries for the transformation tracker, writing Scala/Spark code to perform the transformations (a sketch follows below), and creating new validation libraries for Cucumber. Our team consisted of 7 members and we followed Scrum.
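To illustrate the kind of change the Spark 1.6 to 2.2 migration on Avalon involved, the sketch below shows the standard entry-point rewrite from HiveContext to SparkSession. The table names (avalon_raw.balances, avalon_curated.account_totals) are invented for illustration, not the project's schema.

```scala
import org.apache.spark.sql.SparkSession

object AvalonMigrationSketch {
  def main(args: Array[String]): Unit = {
    // Spark 1.6 style, kept here for comparison:
    //   val sc       = new SparkContext(new SparkConf().setAppName("avalon"))
    //   val hiveCtx  = new org.apache.spark.sql.hive.HiveContext(sc)
    //   val balances = hiveCtx.table("avalon_raw.balances")
    //   balances.registerTempTable("balances")

    // Spark 2.x unifies SQLContext/HiveContext into a single SparkSession.
    val spark = SparkSession.builder()
      .appName("avalon")
      .enableHiveSupport() // required to read Hive tables in the lake
      .getOrCreate()

    val balances = spark.table("avalon_raw.balances")
    // registerTempTable is deprecated in 2.x; use createOrReplaceTempView.
    balances.createOrReplaceTempView("balances")

    spark.sql(
      "SELECT account_id, SUM(amount) AS total FROM balances GROUP BY account_id")
      .write.mode("overwrite").saveAsTable("avalon_curated.account_totals")

    spark.stop()
  }
}
```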
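The Credit Risk migration combined HiveQL queries with Scala/Spark transformation code. Below is a minimal sketch of that pattern under an assumed schema; crdw_raw.exposures, crdw_target.exposure_summary and the column names are placeholders, not the real CRDW tables.

```scala
import org.apache.spark.sql.SparkSession

object CrdwTransformSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("crdw-transform")
      .enableHiveSupport()
      .getOrCreate()

    // One tracked transformation expressed in HiveQL and run from Scala,
    // mirroring the transformation-tracker workflow described above.
    val summary = spark.sql(
      """SELECT account_id,
        |       SUM(balance)               AS total_balance,
        |       SUM(balance * risk_weight) AS risk_weighted_balance
        |FROM   crdw_raw.exposures
        |WHERE  balance > 0
        |GROUP  BY account_id""".stripMargin)

    // Write the curated result into the target layer of the Data Lake.
    summary.write.mode("overwrite").saveAsTable("crdw_target.exposure_summary")

    spark.stop()
  }
}
```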