
RAHUL BANDEWAR

Experience Summary

• 4+ years of experience in designing, developing, deploying, and maintaining scalable batch and streaming data pipelines using cloud-native and open-source technologies.
• Experienced in deploying production-grade data pipelines on GCP, AWS, and Azure.

Certifications

• Google Cloud Certified – Associate Cloud Engineer
• AWS Certified Cloud Practitioner

Key Skills

• Languages: Python, Scala, Unix
• Analytics: SQL
• Big Data: PySpark, Spark, Hive, Kafka
• Databases: Postgres, MySQL, Cosmos DB
• AWS: EMR, S3, RDS, IAM
• Azure: Event Hub, Stream Analytics, ADF, ADL, Azure DWH, Databricks, Cosmos DB
• GCP: BigQuery, Pub/Sub, Dataproc, Cloud Composer, GCS, Cloud SQL
• Orchestration: ADF, Airflow, Git

Education

• Postgraduate Diploma in Advanced Computing, Pune, India – Aug’16 to Feb’17
• Bachelor of Engineering – Electronics and Telecommunication, Pune University, India – Aug’12 to Jun’16

Employment History

• Core Compete – Sep’19 to present
• Atos Global IT – Mar’17 to Aug’19

Personal Details

Name – Rahul Deelip Bandewar
Address – Hadapsar, Pune
Email – bandewarrahul@gmail.com
Phone – 9604367453 / 9420456733
LinkedIn – linkedin.com/in/rahul-bandewar-5a988a102

Project Details

Consultant – Data Engineer, IBM (December’20 – Present)
Project: Data Enablement and Open Banking platform (BEN)
Role: Data Engineer
Assignment: Data enablement for the Product and Pricing module, serving the data needs of the pricing computation component by building streaming and batched pipelines on the AWS cloud.
Activities:
• Participated in detailed design discussions and developed the base framework covering common functionality required across the different portfolios.
• Built multiple batch and streaming pipelines using Spark (a minimal sketch of this kind of pipeline follows the technology stack below).
• Followed a test-driven approach and software engineering best practices.
• Deployed and supported the application across different environments.
Technology stack: AWS EMR, S3, RDS, Confluent Kafka, PySpark (Structured Streaming), Boto3
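For illustration only, a minimal sketch of the Kafka-to-S3 Structured Streaming pattern referenced above; the broker address, topic, event schema, and bucket paths are hypothetical placeholders, not taken from the project, and Kafka security options are omitted.

# Illustrative sketch only – not the project's actual code.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("pricing-stream").getOrCreate()

# Assumed schema of the incoming pricing events (hypothetical fields).
event_schema = StructType([
    StructField("product_id", StringType()),
    StructField("price", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw event stream from a Confluent Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker.example.com:9092")
       .option("subscribe", "pricing-events")
       .option("startingOffsets", "latest")
       .load())

# Parse the JSON payload and keep only the parsed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*"))

# Write micro-batches to S3 as partitioned Parquet, with a checkpoint location
# so the stream can recover after a restart.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/pricing/events/")
         .option("checkpointLocation", "s3://example-bucket/pricing/_checkpoints/")
         .partitionBy("product_id")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()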
Consultant – Data Engineer, Core Compete (September’19 – Present)
Project: SaaS analytics platform for retail promotions (AEO – Experiment)
Role: Data Engineer
Assignment: Build a SaaS analytical platform to test retail promotions and product launches across the retailer's stores. It involved creating batched pipelines to serve the data science requirements using GCP data services.
Activities:
• Involved in designing the data pipelines and the data warehouse schema, based on the querying patterns and requirements of the analytics team, to minimize data movement for online analytics on the GCP cloud.
• Developed a configurable, client-agnostic warehouse schema and data processing framework with a microservice architecture.
• Developed a scalable, config-driven, object-oriented ETL framework by creating wrappers around the APIs of services like BigQuery, Pub/Sub, and GCS (a sketch of this wrapper idea follows the technology stack below).
• Orchestrated the pipelines with Airflow to perform scheduled data loads.
• Followed a test-driven approach and software engineering best practices.
Technology stack: BigQuery, Pub/Sub, GCS, Python, PyTest
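For illustration only, a minimal sketch of the config-driven wrapper idea around the BigQuery load API; the project, dataset, table, and bucket names below are hypothetical and the real framework's configuration and class names are not shown here.

# Illustrative sketch only – not the project's actual framework.
from dataclasses import dataclass
from typing import Optional

from google.cloud import bigquery


@dataclass
class LoadConfig:
    """One client-agnostic load entry, typically read from a config file."""
    gcs_uri: str                          # e.g. "gs://example-bucket/promotions/*.parquet"
    table_id: str                         # e.g. "example-project.retail.promotions"
    write_disposition: str = "WRITE_TRUNCATE"


class BigQueryLoader:
    """Thin wrapper around the BigQuery client, driven entirely by LoadConfig entries."""

    def __init__(self, client: Optional[bigquery.Client] = None):
        self.client = client or bigquery.Client()

    def load(self, cfg: LoadConfig) -> None:
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.PARQUET,
            write_disposition=cfg.write_disposition,
        )
        job = self.client.load_table_from_uri(cfg.gcs_uri, cfg.table_id, job_config=job_config)
        job.result()  # block until the load job finishes (raises on failure)


if __name__ == "__main__":
    BigQueryLoader().load(LoadConfig(
        gcs_uri="gs://example-bucket/promotions/*.parquet",
        table_id="example-project.retail.promotions",
    ))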
Data Engineer, Atos Global IT (March’17 – August’19)
Project: ENEL Next Platform (migrating traditional ETL pipelines into CDH)
Role: Data Engineer
Assignment: Migrated a traditional ETL process to the cloud-based Cloudera platform, which involved building batched pipelines that migrate data from Oracle DB into an HBase data store.
Activities:
• Developed the transformation layer, which performs validation and conversion over the data using Spark.
• Prepared system test cases.
Technology stack: Cloudera CDH, Spark, Scala, FlatSpec

Project: Codex iFabric (predictive maintenance SaaS platform for windmills)
Role: Data Engineer
Assignment: A SaaS solution that notifies predictive maintenance measures for windmills deployed across the globe, based on real-time streaming feeds from their sensors. The streaming pipeline was built with Azure services.
Activities:
• Involved in the planning, design, and development of batched and near real-time services using Azure.
• Developed the batched pipelines to support data and analytics capabilities using Azure services like ADF, HDInsight, and Blob Storage.
• Developed the near real-time data pipeline using Azure capabilities like Event Hub, Azure Stream Analytics, and Cosmos DB (a minimal sketch of this flow follows the technology stack below).
• Developed the transformation layer using HDInsight and Azure Stream Analytics.
Technology stack: ADF, Azure Stream Analytics, Event Hub, HDInsight, Cosmos DB, ADL
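For illustration only, a minimal sketch of the near real-time Event Hub to Cosmos DB flow described above; in the actual project the transformation ran in Azure Stream Analytics rather than in Python, and the connection strings, hub, database, and container names below are placeholders.

# Illustrative sketch only – not the project's actual code.
import json

from azure.eventhub import EventHubConsumerClient
from azure.cosmos import CosmosClient

EVENTHUB_CONN = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."
COSMOS_URL = "https://example-account.documents.azure.com:443/"
COSMOS_KEY = "<cosmos-primary-key>"

cosmos = CosmosClient(COSMOS_URL, credential=COSMOS_KEY)
container = (cosmos.get_database_client("telemetry")
             .get_container_client("sensor_readings"))


def on_event(partition_context, event):
    # Each event is assumed to carry one JSON sensor reading; the document must
    # include the container's "id" and partition key fields.
    reading = json.loads(event.body_as_str())
    container.upsert_item(reading)
    # Without a configured checkpoint store this checkpoint is in-memory only.
    partition_context.update_checkpoint(event)


consumer = EventHubConsumerClient.from_connection_string(
    EVENTHUB_CONN,
    consumer_group="$Default",
    eventhub_name="windmill-sensors",
)
with consumer:
    consumer.receive(on_event=on_event, starting_position="-1")  # "-1" = read from the beginning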

Project: Codex Fabric (PaaS cloud product)


Role: Software Developer
Assignment: Built a cloud-native, scalable platform used by data engineers and data scientists for analytical purposes. The platform was compatible with AWS, Azure, and TAI cloud.
Activities:
• Involved in implementing security over the cloud environment by developing effector-based enhancements to support data encryption at rest and malware protection.
• Implemented private cloud infrastructure using OpenStack.
• Involved in the technical support cycle.
Technology stack: Bash scripting, Core Java, AWS, Azure, TAI cloud
