Skills
• Languages: Python, Scala, Unix
• Analytics: SQL
• Big Data: PySpark, Spark, Hive, Kafka
• Databases: Postgres, MySQL, Cosmos DB
• AWS: EMR, S3, RDS, IAM
• Azure: Event Hubs, Stream Analytics, ADF, ADL, Azure DWH, Databricks, Cosmos DB
• GCP: BigQuery, Pub/Sub, Dataproc, Cloud Composer, GCS, CloudSQL
• Orchestration: ADF, Airflow, Git

Education
• Postgraduate Diploma in Advanced Computing, Pune, India – Aug’16 to Feb’17
• Bachelor of Engineering – Electronics and Telecommunication, Pune University, India – Aug’12 to Jun’16

Employment History
• Core Compete – Sep’19 to present
• Atos Global IT – Mar’17 to Aug’19

Personal Details
Name – Rahul Deelip Bandewar
Address – Hadapsar, Pune
Email – bandewarrahul@gmail.com
Phone – 9604367453 / 9420456733
LinkedIn – linkedin.com/in/rahul-bandewar-5a988a102

Consultant – Data Engineer, Core Compete (September’19 – Present)
Project: SaaS analytics platform for retail promotions (AEO - Experiment)
Role: Data Engineer
Assignment: Built a SaaS analytical platform to test retail promotions and product launches across the client’s retail stores. This involved creating batched pipelines to serve the data science requirements using GCP data services.
Activities:
• Involved in designing the data pipeline and data warehouse schema, based on the querying patterns and requirements of the analytics team, to minimize data movement for online analytics on the GCP cloud.
• Developed a configurable, client-agnostic warehouse schema and data processing framework with a microservice architecture.
• Developed a scalable, config-driven, object-oriented ETL framework by creating wrappers around the APIs of services such as BigQuery, Pub/Sub, and GCS.
• Orchestrated the pipeline with Airflow to perform scheduled data loads.
• Followed a test-driven approach and software engineering best practices.
Technology stack: BigQuery, Pub/Sub, GCS, Python, PyTest.

Data Engineer, Atos Global IT (March’17 – August’19)
Project: ENEL Next Platform (migrating traditional ETL pipelines onto CDH)
Role: Data Engineer
Assignment: Migrated a traditional ETL process onto the cloud-based Cloudera platform, which involved building batched pipelines that move data from Oracle DB to an HBase data store.
Activities:
• Developed a transformation layer that performs validation and conversion of data using Spark.
• Prepared system test cases.
Technology stack: Cloudera CDH, Spark, Scala, FlatSpec

Project: Codex iFabric (predictive maintenance SaaS platform for windmills)
Role: Data Engineer
Assignment: A SaaS solution that issues predictive maintenance notifications for windmills deployed across the globe, based on real-time streaming feeds from their sensors. The streaming pipeline was built with Azure services.
Activities:
• Involved in the planning, design, and development of batched and near real-time services on Azure.
• Developed the batched pipeline to support data and analytics capabilities using Azure services such as ADF, HDInsight, and Blob Storage.
• Developed the near real-time data pipeline using Azure Event Hubs, Azure Stream Analytics, and Cosmos DB.
• Developed the transformation layer using HDInsight and Azure Stream Analytics.
Technology stack: ADF, Azure Stream Analytics, Event Hubs, HDInsight, Cosmos DB, ADL