Principal Data Engineer – Sayari


Sayari

Remote - United States

Background

Sayari is the transparency company, providing the public and private sectors with immediate visibility into complex commercial relationships. It delivers the largest commercially available collection of corporate and trade data as a dynamic model of global ownership and trade activity, and its solutions harness this model to enable risk resilience, complex investigations, and clear-eyed business decisions. Sayari is headquartered in Washington, D.C., and its solutions are used by thousands of frontline analysts in over 35 countries.
Our company culture is defined by a dedication to our mission of using open data to enhance visibility into global commercial and financial networks, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you like working with supportive, high-performing, and curious teams, Sayari is the place for you.

Job Summary

We are looking for a Principal Data Engineer to join our Data Resolution team and serve as a technical anchor for our most complex data challenges. In this role, you will be a “player-coach,” spending roughly 70% of your time hands-on with Spark and graph data logic and the remainder on system architecture, design planning, and technical mentorship. You will be instrumental in evolving our graph build pipelines, optimizing our cloud footprint, and overseeing the long-term planning and execution of major data pipeline re-architectures. This is a high-impact role where your work directly powers the data products used by global systems defenders.

Responsibilities

  • Design and implement complex Spark data logic, focusing on performance optimization, data volume tuning, and robust execution.
  • Own the architectural design of graph build pipelines, ensuring they are scalable, automated, and highly resilient.
  • Plan and oversee the strategic re-architecture of data pipelines to meet evolving business needs and scale.
  • Optimize infrastructure-as-code and schema designs to reduce cloud costs and improve pipeline latency.
  • Act as a technical consultant for the team, fostering a collaborative and engineer-led approach to design decisions.
  • Support the development of the engineering team through code reviews, design docs, and architectural best practices.
  • Ensure the accuracy of mission-critical data outputs.

Requirements

Required Skills & Experience

  • 8+ years of experience in the big data space, with a proven track record of implementing large-scale features and leading process redesigns.
  • Expert-level mastery of Apache Spark for large-scale data processing.
  • Strong experience with orchestration tools (e.g., Airflow) and cloud computing environments.
  • Hands-on experience architecting and managing data flows into databases such as Elasticsearch, Memgraph, and Cassandra.
  • Demonstrated ability in system architecture, including Infrastructure as Code (IaC) and schema design.
  • A “builder” mindset with experience evolving and improving existing architectures to meet new scale requirements.

Preferred Skills & Experience

  • Experience working specifically with graph data or graph databases.
  • Prior experience with entity resolution or identity resolution systems.
  • Experience evaluating and selecting modern analytical database architectures.

Interested Applicants

Apply here.