Software Engineer – Data

Location: Glendale, CA
Employment Type: Contract
Job ID: 140739
Date Added: 09/09/2024

Job Title: Sr Data Engineer
Location: Glendale, CA – Hybrid Onsite Schedule

The Company
Headquartered in Los Angeles, this leader in the Entertainment & Media space is focused on delivering world-class stories and experiences to its global audience. To that end, its technology teams focus on continued innovation and the use of cutting-edge technology.

Platform / Stack
You will work with technologies that include Python, AWS, Airflow, and Snowflake.
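
As a rough illustration of this stack (not part of the posting itself), here is a minimal sketch of a daily Airflow pipeline that lands data in Snowflake; the DAG name, task logic, and target are hypothetical:

    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_snowflake_load():
        """Hypothetical daily pipeline: extract rows, then load them."""

        @task
        def extract() -> list[dict]:
            # Stand-in extract step; a real job would read from S3, an API, etc.
            return [{"id": 1, "value": 42}]

        @task
        def load(rows: list[dict]) -> None:
            # Stand-in load step; a real job might use the Snowflake
            # provider's SnowflakeHook to write these rows to a table.
            print(f"would load {len(rows)} rows into Snowflake")

        load(extract())

    daily_snowflake_load()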

Compensation Expectation: $180,000–$200,000

What You'll Do as a Sr Data Engineer:
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build tools and services to support data discovery, lineage, governance, and privacy
  • Collaborate with other software/data engineers and cross-functional teams
  • Work on a tech stack that includes Airflow, Spark, Databricks, Delta Lake, and Snowflake
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Engage with and understand our customers, forming relationships that help us prioritize both innovative new offerings and incremental platform improvements
  • Maintain detailed documentation of your work and changes to support data quality and data governance requirements

Qualifications
You could be a great fit if you have:
  • 5+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Strong SQL skills and ability to create queries to analyze complex datasets
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
  • Experience in developing APIs with GraphQL
  • Deep understanding of AWS or another major cloud provider, as well as infrastructure as code
  • Familiarity with data modeling techniques and data warehousing standards and best practices