Senior Database Engineer

At MarketOnce, we empower businesses with the insights and strategies they need to excel in today's dynamic market. With a strong foundation in market research, we offer innovative solutions in research, software, consulting, advertising, and marketing to corporations, private equity firms, and other organizations seeking to achieve their goals.
Our team is distinguished by its client-centric approach—treating each client's business with the same dedication and care as if it were our own. This commitment enables us to deliver personalized service and achieve the highest standards of success and innovation in everything we do. Together, our family of companies, including MarketOnce, ROI Rocket, and eAccountable, works toward delivering unparalleled solutions. Headquartered in Denver, Colorado, our global team collaborates from locations across the US and Europe.
We value curiosity, creativity, collaboration, and expertise, continuously striving to push boundaries and exceed our clients' expectations. Join us to be part of a culture that drives meaningful results. 


About the Opportunity:

As a Sr. Database Engineer, you'll join a close-knit team building the future of a platform designed to power immersive digital experiences for our customers and partners. In this role, you will leverage deep technical expertise to design, optimize, and operationalize large-scale data solutions spanning distributed compute, data lakes, and relational and vector databases, ensuring high performance, scalability, and reliability. You'll work across the full modern data stack — from Lakehouse foundations to real-time inference — enabling advanced analytics, semantic search, and AI-driven capabilities at scale.


What You'll Do:

  • Analyze and optimize production data solutions to identify and resolve issues related to performance, locking, and scalability
  • Write and optimize complex SQL across the full stack — Spark SQL for distributed transformations, PostgreSQL for transactional and analytical workloads, and Delta Lake for versioned, ACID-compliant table management
  • Design and build large-scale Spark and Azure Data Factory pipelines for batch and streaming ingestion, transformation, and feature engineering — leveraging both PySpark and Spark SQL for distributed processing at scale
  • Design and develop data lake architectures including Medallion Architecture to support advanced analytics and large-scale data ingestion
  • Build and maintain production-grade data pipelines for efficient data movement and transformation across systems, managing Delta table lifecycles, schema evolution, Z-ordering, time travel, and Change Data Feed to support reliable, performant analytics, OLTP, and ML workloads
  • Design and operate hybrid storage patterns combining PostgreSQL for transactional workloads — with optimized schemas, indexes, CTEs, window functions, and partitioning — alongside Delta Lake Lakehouse layers for analytical and ML workloads
  • Design and implement reporting solutions that deliver actionable insights to business stakeholders
  • Communicate database and data architecture designs to business and technical audiences, including business users, program sponsors, database administrators, ETL and BI developers
  • Evaluate potential technology/tool solutions that meet business needs and facilitate internal and external discussions towards desirable outcomes
  • Collaborate with solution architects and project resources on systems integration and compatibility, while acting as a leader in coaching, training, and providing guidance
  • Create functional and technical documentation related to data architecture and business intelligence solutions
  • Provide technical consulting to application development teams during application design and development for highly complex or critical projects
  • Design data governance procedures to ensure compliance with internal and external regulations 


What We’re Looking For:

  • 7+ years in data engineering, big data platforms, or a related discipline with hands-on production experience at scale
  • Located in Eastern or Central time zone; you will work extensively with a team member in the UK
  • Proven experience analyzing and tuning production database systems for performance and reliability
  • Expert-level SQL skills — complex joins, CTEs, window functions, query plan analysis, and optimization across both OLTP (PostgreSQL) and distributed engines (Spark SQL, Databricks SQL, Delta Lake)
  • Hands-on experience with data lake technologies and data pipeline frameworks (e.g., Azure Data Lake, Azure Data Factory, Databricks)
  • Deep expertise in Apache Spark — DataFrames, Spark SQL, UDFs, partitioning, broadcast strategies, and hands-on performance tuning experience
  • Data warehouse and visualization experience, including demonstrated strong logical, physical, and dimensional modeling skills
  • Solid command of Delta Lake internals — transaction log, schema enforcement, schema evolution, time travel, CDF, Z-ordering, liquid clustering, and OPTIMIZE/VACUUM operations
  • Strong PostgreSQL experience — schema design, indexing strategies, partitioning, EXPLAIN/ANALYZE tuning, and extensions including pgvector for similarity search
  • Strong Python skills — PySpark, pandas, async programming, building production data utilities
  • Strong understanding of reporting and BI solution design, including Power BI or similar tools
  • Experience designing technology roadmaps and planning the transition from current architecture to a future target architecture
  • Excellent verbal and written communication skills to document and present data models, strategies, standards, and concepts to both business and IT audiences
  • Experience with the following tech & tools: 
    • SQL Engines: Azure SQL Server, Spark, Cosmos DB, PostgreSQL, Elastic
    • Azure Technologies: Cosmos DB, SQL Database, Analytics, Azure Databricks, Data Factory, Fabric, Power BI, Azure Data Lake
    • ETL Tools: Azure Data Factory, Azure Databricks, Azure Stream Analytics
    • Lakehouse: Delta Lake, Delta Live Tables
    • Languages: SQL, Python, PySpark


What We Offer:

  • Competitive base salary: $150,000-$180,000/year
  • Flexible vacation policy – take the time you need to recharge
  • Comprehensive health, vision & dental insurance
  • 401k with company contribution
  • Opportunity for career progression with plenty of room for personal growth


What to Expect:

  • 1st Round: 30-45 minute interview with the Recruiter
  • 2nd Round: 45-minute interview with the Hiring Manager (Technical conversation)
  • 3rd Round: 45-minute interview with Tech Leader (Problem solving)

We do not work with outside recruiting agencies.


MarketOnce will accept applications for this role on an ongoing basis.

MarketOnce is an Equal Opportunity Employer. We believe in creating a diverse and inclusive workplace where everyone has the opportunity to thrive. We are committed to hiring individuals based on their skills and qualifications, regardless of race, gender, age, sexual orientation, disability, or any other characteristic. We welcome and encourage applications from all backgrounds.

Technology

Remote (United States)
