
Senior Data Engineer



Data Science
Posted on Thursday, September 14, 2023

*Please note: this role is based in India.

Who We Are

Yieldmo is an advertising technology company that operates a smart exchange that differentiates and enhances the value of ad inventory for buyers and sellers. As a leader in contextual analytics, real time technology, and digital formats, we create, measure, model, and optimize campaigns for unmatched scale and performance. By understanding how each unique impression behaves and looking for patterns and performance in real time, we can drive real performance gains without relying on audience data.

Yieldmo is a fully-distributed, global company that provides the opportunity for employees to activate their entrepreneurial side. With about 150 employees, we are well-positioned for success in the new phase of adtech innovation. We firmly believe that each person we bring onto our team can make an impact.

What We Need

We are looking for a strong programmer: an independent performer who is curious to investigate problems and engineer the optimized solutions that best solve them.

What You Can Expect In This Role

As a member of the Yieldmo data team, you will build innovative data pipelines for processing and analyzing our large user datasets (250+ billion events per month). A unique challenge of the role is being comfortable developing solutions across varied technologies: custom transformation/integration apps in Python and Java, pipelines in Spark and AWS, and data analysis and insight publishing in SQL.
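As a toy illustration of the kind of transformation work described above (the event schema and field names here are hypothetical, not Yieldmo's actual data model), a minimal Python sketch that buckets ad events by hour, one stage a larger ETL job might perform:

```python
from collections import Counter
from datetime import datetime

# Hypothetical, simplified event records; a real pipeline would read
# billions of these from a stream or data lake, not an in-memory list.
events = [
    {"ts": "2023-09-14T10:15:00", "placement": "header", "clicked": True},
    {"ts": "2023-09-14T10:45:00", "placement": "header", "clicked": False},
    {"ts": "2023-09-14T11:05:00", "placement": "footer", "clicked": True},
]

def impressions_per_hour(events):
    """Count impressions bucketed by hour of occurrence."""
    counts = Counter()
    for e in events:
        # Truncate each timestamp to its hour to form the bucket key.
        hour = datetime.fromisoformat(e["ts"]).strftime("%Y-%m-%dT%H:00")
        counts[hour] += 1
    return dict(counts)

print(impressions_per_hour(events))
# {'2023-09-14T10:00': 2, '2023-09-14T11:00': 1}
```

At production scale, the same aggregation would typically be expressed as a Spark job or a SQL `GROUP BY` over a warehouse table rather than a single-process loop.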


  • BS or higher degree in Computer Science, Engineering, or another related field
  • 5+ years of object-oriented programming experience in languages such as Java, Scala, or C++
  • 3+ years of experience developing in Python to transform large datasets on distributed and clustered infrastructure
  • 5+ years of experience engineering ETL data pipelines for big data systems
  • Prior experience designing and building ELT infrastructure involving streaming systems such as Spark, AWS EMR, EKS, ECS, AWS Glue, and Airflow
  • Experience implementing clustered/distributed/multi-threaded infrastructure to support data processing on Snowflake
  • Proficient in SQL, with experience performing data transformations and data analysis
  • Comfortable juggling multiple technologies and high-priority tasks

Hiring Process

Select candidates will be invited to schedule a 30 minute screening call with a member of our Talent Acquisition team. We will discuss the Hiring Process details at that time. The hiring process typically includes, but is not limited to:

  • Two 60-minute code-pairing rounds, in which the candidate is expected to write solutions in code, in both Python and SQL
  • One 60-minute design session, in which the candidate is expected to present data-systems design options for a proposed big-data problem
  • A 30 minute video interview with the Hiring Manager

Nice to Haves

  • Ad tech experience


What We Offer

  • Fully remote workplace
  • Generous employer contribution to Health Benefit premiums & 401k Match
  • Work/life balance: flexible PTO, competitive compensation packages, Summer Fridays & much more
  • 1 Mental Escape (ME) day each quarter to fully unplug and recharge
  • A generous learning stipend and other opportunities for professional development
  • Dedicated staff committed to diversity and inclusion
  • An allowance to help you upgrade your home office