Swoop

Data Engineer, MyHealthTeam

Swoop, a market leader in privacy-safe, award-winning omnichannel healthcare marketing, connects patients, healthcare providers (HCPs), and brands at scale across all channels. Our teams leverage the power of AI-driven technology combined with real-world data (RWD), first- and zero-party data, and engagement data to empower pharmaceutical marketers to make faster, more precise decisions that improve patient outcomes.

 

At Swoop, our mission is to create a future where technology seamlessly connects patients and HCPs in a privacy-safe way, improving the patient journey and driving better health outcomes. Since achieving independence in 2024, Swoop has experienced significant growth and demonstrated an unwavering commitment to innovation, talent development, and enhancing the patient experience. Our acquisition of MyHealthTeam in January 2025 brought vibrant social communities into our omnichannel suite, further bridging the gap between healthcare brands and patients for more impactful and targeted engagement. 

 

We believe our people are our greatest asset. Swoop fosters a culture of innovation and continuous learning, providing employees with rich opportunities for professional growth. This commitment to our team earned us the "Best Places to Work" recognition from Business Intelligence Group in 2025, based on a survey of over 100 employees. We are driven by a patient-first philosophy and are passionate about leveraging technology to create a healthier future. 

 

If you're a driven professional seeking to make a real difference in healthcare marketing at a fast-growing, innovative company, join Swoop and help us revolutionize how brands connect with patients and HCPs. 


About the role


We’re looking for a mid- to senior-level Data Engineer to join our growing Data Team, reporting to the Senior Director of Data Engineering. You’ll design and maintain data pipelines, models, and infrastructure that power analytics, insights, and personalization for millions. Our modern stack includes AWS, Redshift, Databricks, Airflow, and Spark, with Python and SQL as core languages. This is a key role in delivering scalable, reliable data solutions across the organization.


What you'll do

  • Develop scalable, cloud-based data pipelines integrated with data warehouses such as Redshift or Snowflake.
  • Orchestrate data workflows using tools such as Airflow, Prefect, or Dagster to ensure reliable pipeline execution.
  • Design, optimize, and maintain data models, SQL queries, and Spark-based data processing workflows.
  • Work across data lakehouse and warehouse systems to deliver clean, reliable, and analytics-ready data.
  • Ensure high standards of data quality, governance, and security.
  • Collaborate cross-functionally with the Data Analytics, Product, and Marketing teams to gather requirements and deliver data that enables actionable insights.
  • Monitor and optimize the performance of data systems and troubleshoot issues as needed.

This is an individual contributor role.
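For candidates wondering what day-to-day work in this stack looks like, here is a minimal, hypothetical sketch of an extract-transform-load step in Python. All names are illustrative, and sqlite3 stands in for a warehouse like Redshift or Snowflake; in practice these steps would run in Spark and be orchestrated by Airflow.

```python
import sqlite3

def extract(rows):
    """Simulate pulling raw engagement events (stand-in for an S3 or API source)."""
    return [r for r in rows if r.get("user_id") is not None]

def transform(events):
    """Aggregate events per user -- the kind of modeling step done in Spark or SQL."""
    counts = {}
    for e in events:
        counts[e["user_id"]] = counts.get(e["user_id"], 0) + 1
    return [{"user_id": u, "event_count": c} for u, c in sorted(counts.items())]

def load(records, conn):
    """Write analytics-ready rows; sqlite3 stands in for the warehouse."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS user_events "
        "(user_id TEXT PRIMARY KEY, event_count INTEGER)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO user_events VALUES (:user_id, :event_count)",
        records,
    )
    conn.commit()

if __name__ == "__main__":
    raw = [{"user_id": "a"}, {"user_id": "a"}, {"user_id": "b"}, {"user_id": None}]
    conn = sqlite3.connect(":memory:")
    load(transform(extract(raw)), conn)
    print(conn.execute("SELECT user_id, event_count FROM user_events ORDER BY user_id").fetchall())
    # -> [('a', 2), ('b', 1)]
```

The real pipelines operate at far larger scale, but the extract/transform/load shape is the same.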

Qualifications

  • Minimum eight (8) years of experience in data engineering, with a strong focus on ETL/ELT pipelines, cloud platforms, and data modeling
  • Hands-on experience designing and managing scalable, cloud-native data pipelines using AWS services such as S3, Redshift, Glue, and Lambda
  • Strong background in utilizing Databricks and Apache Spark to perform large-scale data transformation, processing, and analytics
  • Strong programming skills in Python and SQL
  • Solid understanding of data modeling, warehousing, and pipeline performance tuning
  • Experience delivering production-ready ETL/ELT pipelines
  • Infrastructure-as-code experience (Terraform, CloudFormation) desired
  • Familiarity with containerization (Docker, Kubernetes) desired
  • Analytical, detail-oriented, and adaptable with a strong sense of ownership
  • Skilled in cross-functional collaboration, stakeholder alignment, and clear communication

The pay range for this role is:

130,000 - 170,000 USD per year (Remote, United States)

R&D

Remote (United States)
