
There are 2 ROLES AVAILABLE:

Salary: $180-210K base + 10-15% bonus + $100K equity

This is a HYBRID role, so relocation to the Culver City, CA area is required (no exceptions).

Are you passionate about data engineering and up for a challenging role that will elevate your
career to the next level? Join our team and work at the cutting edge of big data, cloud
technologies, and analytics.

What You'll Do:

• Architect single- and multi-node ETL pipelines that handle large-scale data
efficiently.
• Implement robust quality assurance measures for new and existing data pipelines.
• Create enriched datasets that serve both internal and external analytics tools.
• Resolve issues efficiently, liaising directly with internal data consumers to keep
operations running smoothly.
• Utilize scheduling and orchestration tools to automate data pipeline runs, enhancing
efficiency.
• Expertly handle massive volumes of data, ensuring optimization and performance.
• Utilize various external APIs to augment and improve our data resources.
• Facilitate analytics by establishing well-organized database tables.
• Leverage big data technologies in AWS to advance data availability and quality.

Who You Are:

• A bachelor’s degree in computer science or computer information systems is preferred.
• 5+ years of software engineering and 3+ years of hands-on experience in data engineering
with Apache Spark.
• A minimum of 3 years of experience in managing cloud-based services, primarily in AWS.
• Proficient in Python & SQL, with strong experience in DataFrame APIs like Pandas and
Spark.
• Comfortable with additional languages such as Java, Scala, and R, along with
data-optimized file formats like Parquet and Avro.
• Experienced with RDBMS and data warehouse solutions such as Redshift.
